00:00:00.001 Started by upstream project "autotest-per-patch" build number 122925 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.060 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.087 Fetching changes from the remote Git repository 00:00:00.088 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.118 Using shallow fetch with depth 1 00:00:00.118 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.118 > git --version # timeout=10 00:00:00.142 > git --version # 'git version 2.39.2' 00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.143 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.143 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.230 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.240 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.251 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:04.251 > git config core.sparsecheckout # timeout=10 00:00:04.262 > git read-tree -mu HEAD # timeout=10 00:00:04.276 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:04.290 Commit message: "inventory/dev: add missing long names" 00:00:04.291 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:04.368 [Pipeline] Start of Pipeline 00:00:04.380 [Pipeline] library 00:00:04.380 Loading library shm_lib@master 00:00:04.381 Library shm_lib@master is cached. Copying from home. 00:00:04.396 [Pipeline] node 00:00:04.402 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.405 [Pipeline] { 00:00:04.415 [Pipeline] catchError 00:00:04.416 [Pipeline] { 00:00:04.428 [Pipeline] wrap 00:00:04.435 [Pipeline] { 00:00:04.443 [Pipeline] stage 00:00:04.445 [Pipeline] { (Prologue) 00:00:04.664 [Pipeline] sh 00:00:04.949 + logger -p user.info -t JENKINS-CI 00:00:04.970 [Pipeline] echo 00:00:04.971 Node: CYP9 00:00:04.980 [Pipeline] sh 00:00:05.290 [Pipeline] setCustomBuildProperty 00:00:05.299 [Pipeline] echo 00:00:05.300 Cleanup processes 00:00:05.305 [Pipeline] sh 00:00:05.591 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.591 1121774 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.602 [Pipeline] sh 00:00:05.888 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.888 ++ grep -v 'sudo pgrep' 00:00:05.888 ++ awk '{print $1}' 00:00:05.888 + sudo kill -9 00:00:05.888 + true 00:00:05.904 [Pipeline] cleanWs 00:00:05.913 [WS-CLEANUP] Deleting project workspace... 00:00:05.914 [WS-CLEANUP] Deferred wipeout is used... 
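For reference, the "Cleanup processes" step above reduces to a small pgrep/kill idiom. The following is a condensed sketch of it, not the exact pipeline script; the workspace path is taken from this log and is machine-specific.

#!/usr/bin/env bash
# Sketch of the stale-process cleanup seen in the prologue above.
# WORKSPACE mirrors the Jenkins workspace path used in this run.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# List leftover processes still referencing the workspace's spdk tree,
# drop the pgrep invocation itself, and keep only the PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Kill them if any exist; '|| true' keeps the step from failing when none are found,
# matching the '+ true' fallback in the log.
sudo kill -9 $pids || true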
00:00:05.921 [WS-CLEANUP] done 00:00:05.925 [Pipeline] setCustomBuildProperty 00:00:05.939 [Pipeline] sh 00:00:06.227 + sudo git config --global --replace-all safe.directory '*' 00:00:06.293 [Pipeline] nodesByLabel 00:00:06.295 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.305 [Pipeline] httpRequest 00:00:06.309 HttpMethod: GET 00:00:06.309 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:06.314 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:06.333 Response Code: HTTP/1.1 200 OK 00:00:06.333 Success: Status code 200 is in the accepted range: 200,404 00:00:06.334 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.994 [Pipeline] sh 00:00:09.277 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:09.294 [Pipeline] httpRequest 00:00:09.298 HttpMethod: GET 00:00:09.298 URL: http://10.211.164.101/packages/spdk_c7a82f3a8f85977b66695c54c9af3df251f453ae.tar.gz 00:00:09.299 Sending request to url: http://10.211.164.101/packages/spdk_c7a82f3a8f85977b66695c54c9af3df251f453ae.tar.gz 00:00:09.303 Response Code: HTTP/1.1 200 OK 00:00:09.303 Success: Status code 200 is in the accepted range: 200,404 00:00:09.304 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c7a82f3a8f85977b66695c54c9af3df251f453ae.tar.gz 00:00:27.143 [Pipeline] sh 00:00:27.431 + tar --no-same-owner -xf spdk_c7a82f3a8f85977b66695c54c9af3df251f453ae.tar.gz 00:00:29.989 [Pipeline] sh 00:00:30.273 + git -C spdk log --oneline -n5 00:00:30.273 c7a82f3a8 ut/raid: move out raid0-specific tests to separate file 00:00:30.273 d1c04ac68 ut/raid: make the common ut functions public 00:00:30.273 0c4a15f60 ut/raid: remove unused globals and functions 00:00:30.273 7f4657a85 raid: fix race between process starting and removing a base bdev 00:00:30.273 715ca65af raid: don't remove an unconfigured base bdev 00:00:30.287 [Pipeline] } 00:00:30.303 [Pipeline] // stage 00:00:30.311 [Pipeline] stage 00:00:30.312 [Pipeline] { (Prepare) 00:00:30.329 [Pipeline] writeFile 00:00:30.346 [Pipeline] sh 00:00:30.631 + logger -p user.info -t JENKINS-CI 00:00:30.644 [Pipeline] sh 00:00:30.932 + logger -p user.info -t JENKINS-CI 00:00:30.948 [Pipeline] sh 00:00:31.239 + cat autorun-spdk.conf 00:00:31.239 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.239 SPDK_TEST_NVMF=1 00:00:31.239 SPDK_TEST_NVME_CLI=1 00:00:31.239 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.239 SPDK_TEST_NVMF_NICS=e810 00:00:31.239 SPDK_TEST_VFIOUSER=1 00:00:31.239 SPDK_RUN_UBSAN=1 00:00:31.239 NET_TYPE=phy 00:00:31.247 RUN_NIGHTLY=0 00:00:31.252 [Pipeline] readFile 00:00:31.274 [Pipeline] withEnv 00:00:31.276 [Pipeline] { 00:00:31.292 [Pipeline] sh 00:00:31.584 + set -ex 00:00:31.584 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:31.584 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:31.584 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.584 ++ SPDK_TEST_NVMF=1 00:00:31.584 ++ SPDK_TEST_NVME_CLI=1 00:00:31.584 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.584 ++ SPDK_TEST_NVMF_NICS=e810 00:00:31.584 ++ SPDK_TEST_VFIOUSER=1 00:00:31.584 ++ SPDK_RUN_UBSAN=1 00:00:31.584 ++ NET_TYPE=phy 00:00:31.584 ++ RUN_NIGHTLY=0 00:00:31.584 + case $SPDK_TEST_NVMF_NICS in 00:00:31.584 + DRIVERS=ice 00:00:31.584 + [[ tcp == \r\d\m\a ]] 00:00:31.584 + [[ -n ice ]] 00:00:31.584 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:00:31.584 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:31.584 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:31.584 rmmod: ERROR: Module irdma is not currently loaded 00:00:31.584 rmmod: ERROR: Module i40iw is not currently loaded 00:00:31.584 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:31.584 + true 00:00:31.584 + for D in $DRIVERS 00:00:31.584 + sudo modprobe ice 00:00:31.584 + exit 0 00:00:31.594 [Pipeline] } 00:00:31.611 [Pipeline] // withEnv 00:00:31.618 [Pipeline] } 00:00:31.634 [Pipeline] // stage 00:00:31.643 [Pipeline] catchError 00:00:31.646 [Pipeline] { 00:00:31.661 [Pipeline] timeout 00:00:31.661 Timeout set to expire in 40 min 00:00:31.662 [Pipeline] { 00:00:31.676 [Pipeline] stage 00:00:31.678 [Pipeline] { (Tests) 00:00:31.692 [Pipeline] sh 00:00:31.982 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:31.982 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:31.982 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:31.982 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:31.982 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:31.982 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:31.982 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:31.982 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:31.982 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:31.982 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:31.982 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:31.982 + source /etc/os-release 00:00:31.982 ++ NAME='Fedora Linux' 00:00:31.982 ++ VERSION='38 (Cloud Edition)' 00:00:31.982 ++ ID=fedora 00:00:31.982 ++ VERSION_ID=38 00:00:31.982 ++ VERSION_CODENAME= 00:00:31.982 ++ PLATFORM_ID=platform:f38 00:00:31.982 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:31.982 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:31.982 ++ LOGO=fedora-logo-icon 00:00:31.982 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:31.982 ++ HOME_URL=https://fedoraproject.org/ 00:00:31.982 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:31.982 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:31.982 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:31.982 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:31.982 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:31.982 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:31.982 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:31.982 ++ SUPPORT_END=2024-05-14 00:00:31.982 ++ VARIANT='Cloud Edition' 00:00:31.982 ++ VARIANT_ID=cloud 00:00:31.982 + uname -a 00:00:31.982 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:31.982 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:35.280 Hugepages 00:00:35.280 node hugesize free / total 00:00:35.280 node0 1048576kB 0 / 0 00:00:35.280 node0 2048kB 0 / 0 00:00:35.280 node1 1048576kB 0 / 0 00:00:35.280 node1 2048kB 0 / 0 00:00:35.280 00:00:35.280 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:35.280 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:35.280 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:35.280 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:35.280 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:35.280 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:35.280 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 
00:00:35.280 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:35.280 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:35.280 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:35.280 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:35.280 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:35.280 + rm -f /tmp/spdk-ld-path 00:00:35.280 + source autorun-spdk.conf 00:00:35.280 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.280 ++ SPDK_TEST_NVMF=1 00:00:35.280 ++ SPDK_TEST_NVME_CLI=1 00:00:35.280 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.280 ++ SPDK_TEST_NVMF_NICS=e810 00:00:35.280 ++ SPDK_TEST_VFIOUSER=1 00:00:35.280 ++ SPDK_RUN_UBSAN=1 00:00:35.280 ++ NET_TYPE=phy 00:00:35.280 ++ RUN_NIGHTLY=0 00:00:35.280 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:35.280 + [[ -n '' ]] 00:00:35.280 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:35.280 + for M in /var/spdk/build-*-manifest.txt 00:00:35.280 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:35.280 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:35.280 + for M in /var/spdk/build-*-manifest.txt 00:00:35.280 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:35.280 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:35.280 ++ uname 00:00:35.280 + [[ Linux == \L\i\n\u\x ]] 00:00:35.280 + sudo dmesg -T 00:00:35.280 + sudo dmesg --clear 00:00:35.280 + dmesg_pid=1123309 00:00:35.280 + sudo dmesg -Tw 00:00:35.280 + [[ Fedora Linux == FreeBSD ]] 00:00:35.280 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:35.280 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:35.280 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:35.280 + [[ -x /usr/src/fio-static/fio ]] 00:00:35.280 + export FIO_BIN=/usr/src/fio-static/fio 00:00:35.280 + FIO_BIN=/usr/src/fio-static/fio 00:00:35.280 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:35.280 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:35.280 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:35.280 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:35.280 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:35.280 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:35.280 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:35.280 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:35.280 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:35.280 Test configuration: 00:00:35.281 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.281 SPDK_TEST_NVMF=1 00:00:35.281 SPDK_TEST_NVME_CLI=1 00:00:35.281 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.281 SPDK_TEST_NVMF_NICS=e810 00:00:35.281 SPDK_TEST_VFIOUSER=1 00:00:35.281 SPDK_RUN_UBSAN=1 00:00:35.281 NET_TYPE=phy 00:00:35.281 RUN_NIGHTLY=0 16:45:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:35.281 16:45:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:35.281 16:45:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:35.281 16:45:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:35.281 16:45:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.281 16:45:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.281 16:45:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.281 16:45:13 -- paths/export.sh@5 -- $ export PATH 00:00:35.281 16:45:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:35.281 16:45:13 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:35.281 16:45:13 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:35.281 16:45:13 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715784313.XXXXXX 00:00:35.281 16:45:13 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715784313.TSXbwq 00:00:35.281 16:45:13 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:35.281 16:45:13 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:35.281 16:45:13 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:35.281 16:45:13 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:35.281 16:45:13 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:35.281 16:45:13 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:35.281 16:45:13 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:35.281 16:45:13 -- common/autotest_common.sh@10 -- $ set +x 00:00:35.281 16:45:13 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:35.281 16:45:13 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:35.281 16:45:13 -- pm/common@17 -- $ local monitor 00:00:35.281 16:45:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.281 16:45:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.281 16:45:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.281 16:45:13 -- pm/common@21 -- $ date +%s 00:00:35.281 16:45:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:35.281 16:45:13 -- pm/common@21 -- $ date +%s 00:00:35.281 16:45:13 -- pm/common@25 -- $ sleep 1 00:00:35.281 16:45:13 -- pm/common@21 -- $ date +%s 00:00:35.281 16:45:13 -- pm/common@21 -- $ date +%s 00:00:35.281 16:45:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784313 00:00:35.281 16:45:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784313 00:00:35.281 16:45:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784313 00:00:35.281 16:45:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784313 00:00:35.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784313_collect-vmstat.pm.log 00:00:35.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784313_collect-cpu-load.pm.log 00:00:35.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784313_collect-cpu-temp.pm.log 00:00:35.281 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784313_collect-bmc-pm.bmc.pm.log 00:00:36.222 16:45:14 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:36.222 16:45:14 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:36.222 16:45:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:36.222 16:45:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.222 16:45:14 -- spdk/autobuild.sh@16 -- $ date -u 00:00:36.222 Wed May 15 02:45:14 PM UTC 2024 00:00:36.222 16:45:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:36.222 v24.05-pre-665-gc7a82f3a8 00:00:36.222 16:45:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:36.222 16:45:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:36.222 16:45:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:36.222 16:45:14 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:36.222 16:45:14 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:36.222 16:45:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:36.222 ************************************ 00:00:36.222 START TEST ubsan 00:00:36.222 ************************************ 00:00:36.222 16:45:15 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:36.222 using ubsan 00:00:36.222 00:00:36.222 real 0m0.000s 00:00:36.222 user 0m0.000s 00:00:36.222 sys 0m0.000s 00:00:36.222 16:45:15 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:36.222 16:45:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:36.222 ************************************ 00:00:36.222 END TEST ubsan 00:00:36.222 ************************************ 00:00:36.483 16:45:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:36.483 16:45:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:36.483 16:45:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:36.483 16:45:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:36.483 16:45:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:36.483 16:45:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:36.483 16:45:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:36.483 16:45:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:36.483 16:45:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:36.483 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:36.483 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:37.054 Using 'verbs' RDMA provider 00:00:52.540 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:04.813 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:04.813 Creating mk/config.mk...done. 00:01:04.813 Creating mk/cc.flags.mk...done. 00:01:04.813 Type 'make' to build. 00:01:04.813 16:45:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:04.813 16:45:42 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:04.813 16:45:42 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:04.813 16:45:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.813 ************************************ 00:01:04.813 START TEST make 00:01:04.813 ************************************ 00:01:04.813 16:45:42 make -- common/autotest_common.sh@1121 -- $ make -j144 00:01:04.813 make[1]: Nothing to be done for 'all'. 
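To reproduce this stage outside Jenkins, the essentials are the autorun-spdk.conf dumped earlier in the log plus SPDK's autorun and configure entry points. The sketch below is assembled only from the commands and options visible above; the fio path, the -j value, and the checkout location are machine-specific assumptions.

# Condensed sketch, assuming an SPDK checkout at ./spdk in the current workspace.
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
SPDK_TEST_VFIOUSER=1
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=0
EOF
# Either hand the config to SPDK's autorun wrapper, as the job does ...
spdk/autorun.sh "$PWD/autorun-spdk.conf"
# ... or run the configure + build step it performs directly:
cd spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j144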
00:01:05.758 The Meson build system 00:01:05.758 Version: 1.3.1 00:01:05.758 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:05.758 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:05.758 Build type: native build 00:01:05.758 Project name: libvfio-user 00:01:05.758 Project version: 0.0.1 00:01:05.758 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:05.758 C linker for the host machine: cc ld.bfd 2.39-16 00:01:05.758 Host machine cpu family: x86_64 00:01:05.758 Host machine cpu: x86_64 00:01:05.758 Run-time dependency threads found: YES 00:01:05.758 Library dl found: YES 00:01:05.758 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:05.758 Run-time dependency json-c found: YES 0.17 00:01:05.758 Run-time dependency cmocka found: YES 1.1.7 00:01:05.758 Program pytest-3 found: NO 00:01:05.758 Program flake8 found: NO 00:01:05.758 Program misspell-fixer found: NO 00:01:05.758 Program restructuredtext-lint found: NO 00:01:05.758 Program valgrind found: YES (/usr/bin/valgrind) 00:01:05.758 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:05.758 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:05.758 Compiler for C supports arguments -Wwrite-strings: YES 00:01:05.758 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:05.758 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:05.758 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:05.758 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:05.758 Build targets in project: 8 00:01:05.758 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:05.758 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:05.758 00:01:05.758 libvfio-user 0.0.1 00:01:05.758 00:01:05.758 User defined options 00:01:05.758 buildtype : debug 00:01:05.758 default_library: shared 00:01:05.758 libdir : /usr/local/lib 00:01:05.758 00:01:05.758 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.017 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:06.276 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:06.276 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:06.276 [3/37] Compiling C object samples/null.p/null.c.o 00:01:06.276 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:06.276 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:06.276 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:06.276 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:06.276 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:06.276 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:06.276 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:06.276 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:06.276 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:06.276 [13/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:06.276 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:06.276 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:06.276 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:06.276 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:06.276 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:06.276 [19/37] Compiling C object samples/server.p/server.c.o 00:01:06.276 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:06.276 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:06.276 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:06.276 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:06.276 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:06.276 [25/37] Compiling C object samples/client.p/client.c.o 00:01:06.276 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:06.276 [27/37] Linking target samples/client 00:01:06.276 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:06.276 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:06.536 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:06.536 [31/37] Linking target test/unit_tests 00:01:06.536 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:06.536 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:06.536 [34/37] Linking target samples/null 00:01:06.536 [35/37] Linking target samples/server 00:01:06.536 [36/37] Linking target samples/gpio-pci-idio-16 00:01:06.536 [37/37] Linking target samples/lspci 00:01:06.536 INFO: autodetecting backend as ninja 00:01:06.536 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
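The libvfio-user submodule above is configured with Meson (buildtype debug, shared default_library) and then compiled and staged with Ninja and a DESTDIR install, which appears on the next log lines. SPDK normally drives this through its own make targets, so the following is only a minimal sketch reconstructed from the directories and options printed above.

# Rough sketch of the libvfio-user meson/ninja flow shown in this log.
# SPDK_DIR mirrors the checkout path used by this job.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BUILD_DIR=$SPDK_DIR/build/libvfio-user/build-debug
# Configure a debug build with shared libraries, matching the
# "buildtype : debug" / "default_library: shared" options reported above.
meson setup "$BUILD_DIR" "$SPDK_DIR/libvfio-user" --buildtype=debug -Ddefault_library=shared
# Compile, then stage the result under DESTDIR as the install step in the log does.
ninja -C "$BUILD_DIR"
DESTDIR=$SPDK_DIR/build/libvfio-user meson install --quiet -C "$BUILD_DIR"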
00:01:06.536 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:07.108 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:07.108 ninja: no work to do. 00:01:12.398 The Meson build system 00:01:12.398 Version: 1.3.1 00:01:12.398 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:12.398 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:12.398 Build type: native build 00:01:12.398 Program cat found: YES (/usr/bin/cat) 00:01:12.398 Project name: DPDK 00:01:12.398 Project version: 23.11.0 00:01:12.398 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.398 C linker for the host machine: cc ld.bfd 2.39-16 00:01:12.398 Host machine cpu family: x86_64 00:01:12.398 Host machine cpu: x86_64 00:01:12.398 Message: ## Building in Developer Mode ## 00:01:12.398 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:12.398 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:12.398 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:12.398 Program python3 found: YES (/usr/bin/python3) 00:01:12.398 Program cat found: YES (/usr/bin/cat) 00:01:12.398 Compiler for C supports arguments -march=native: YES 00:01:12.398 Checking for size of "void *" : 8 00:01:12.398 Checking for size of "void *" : 8 (cached) 00:01:12.398 Library m found: YES 00:01:12.398 Library numa found: YES 00:01:12.398 Has header "numaif.h" : YES 00:01:12.398 Library fdt found: NO 00:01:12.398 Library execinfo found: NO 00:01:12.398 Has header "execinfo.h" : YES 00:01:12.398 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.398 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:12.398 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:12.398 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:12.398 Run-time dependency openssl found: YES 3.0.9 00:01:12.398 Run-time dependency libpcap found: YES 1.10.4 00:01:12.398 Has header "pcap.h" with dependency libpcap: YES 00:01:12.398 Compiler for C supports arguments -Wcast-qual: YES 00:01:12.398 Compiler for C supports arguments -Wdeprecated: YES 00:01:12.398 Compiler for C supports arguments -Wformat: YES 00:01:12.398 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:12.398 Compiler for C supports arguments -Wformat-security: NO 00:01:12.398 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.398 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:12.398 Compiler for C supports arguments -Wnested-externs: YES 00:01:12.398 Compiler for C supports arguments -Wold-style-definition: YES 00:01:12.398 Compiler for C supports arguments -Wpointer-arith: YES 00:01:12.398 Compiler for C supports arguments -Wsign-compare: YES 00:01:12.398 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:12.398 Compiler for C supports arguments -Wundef: YES 00:01:12.398 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.398 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:12.398 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:12.398 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:12.398 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:12.398 Program objdump found: YES (/usr/bin/objdump) 00:01:12.398 Compiler for C supports arguments -mavx512f: YES 00:01:12.398 Checking if "AVX512 checking" compiles: YES 00:01:12.398 Fetching value of define "__SSE4_2__" : 1 00:01:12.398 Fetching value of define "__AES__" : 1 00:01:12.398 Fetching value of define "__AVX__" : 1 00:01:12.398 Fetching value of define "__AVX2__" : 1 00:01:12.398 Fetching value of define "__AVX512BW__" : 1 00:01:12.398 Fetching value of define "__AVX512CD__" : 1 00:01:12.398 Fetching value of define "__AVX512DQ__" : 1 00:01:12.398 Fetching value of define "__AVX512F__" : 1 00:01:12.398 Fetching value of define "__AVX512VL__" : 1 00:01:12.398 Fetching value of define "__PCLMUL__" : 1 00:01:12.398 Fetching value of define "__RDRND__" : 1 00:01:12.398 Fetching value of define "__RDSEED__" : 1 00:01:12.398 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:12.398 Fetching value of define "__znver1__" : (undefined) 00:01:12.398 Fetching value of define "__znver2__" : (undefined) 00:01:12.398 Fetching value of define "__znver3__" : (undefined) 00:01:12.398 Fetching value of define "__znver4__" : (undefined) 00:01:12.398 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:12.398 Message: lib/log: Defining dependency "log" 00:01:12.398 Message: lib/kvargs: Defining dependency "kvargs" 00:01:12.398 Message: lib/telemetry: Defining dependency "telemetry" 00:01:12.398 Checking for function "getentropy" : NO 00:01:12.398 Message: lib/eal: Defining dependency "eal" 00:01:12.398 Message: lib/ring: Defining dependency "ring" 00:01:12.398 Message: lib/rcu: Defining dependency "rcu" 00:01:12.398 Message: lib/mempool: Defining dependency "mempool" 00:01:12.398 Message: lib/mbuf: Defining dependency "mbuf" 00:01:12.398 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:12.398 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.398 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.398 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.398 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:12.398 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:12.398 Compiler for C supports arguments -mpclmul: YES 00:01:12.398 Compiler for C supports arguments -maes: YES 00:01:12.398 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:12.398 Compiler for C supports arguments -mavx512bw: YES 00:01:12.398 Compiler for C supports arguments -mavx512dq: YES 00:01:12.398 Compiler for C supports arguments -mavx512vl: YES 00:01:12.398 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:12.398 Compiler for C supports arguments -mavx2: YES 00:01:12.398 Compiler for C supports arguments -mavx: YES 00:01:12.398 Message: lib/net: Defining dependency "net" 00:01:12.398 Message: lib/meter: Defining dependency "meter" 00:01:12.398 Message: lib/ethdev: Defining dependency "ethdev" 00:01:12.398 Message: lib/pci: Defining dependency "pci" 00:01:12.398 Message: lib/cmdline: Defining dependency "cmdline" 00:01:12.398 Message: lib/hash: Defining dependency "hash" 00:01:12.398 Message: lib/timer: Defining dependency "timer" 00:01:12.398 Message: lib/compressdev: Defining dependency "compressdev" 00:01:12.398 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:12.398 Message: lib/dmadev: Defining dependency "dmadev" 00:01:12.398 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:12.398 
Message: lib/power: Defining dependency "power" 00:01:12.398 Message: lib/reorder: Defining dependency "reorder" 00:01:12.398 Message: lib/security: Defining dependency "security" 00:01:12.398 Has header "linux/userfaultfd.h" : YES 00:01:12.398 Has header "linux/vduse.h" : YES 00:01:12.398 Message: lib/vhost: Defining dependency "vhost" 00:01:12.398 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:12.398 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:12.398 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:12.398 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:12.398 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:12.398 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:12.398 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:12.399 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:12.399 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:12.399 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:12.399 Program doxygen found: YES (/usr/bin/doxygen) 00:01:12.399 Configuring doxy-api-html.conf using configuration 00:01:12.399 Configuring doxy-api-man.conf using configuration 00:01:12.399 Program mandb found: YES (/usr/bin/mandb) 00:01:12.399 Program sphinx-build found: NO 00:01:12.399 Configuring rte_build_config.h using configuration 00:01:12.399 Message: 00:01:12.399 ================= 00:01:12.399 Applications Enabled 00:01:12.399 ================= 00:01:12.399 00:01:12.399 apps: 00:01:12.399 00:01:12.399 00:01:12.399 Message: 00:01:12.399 ================= 00:01:12.399 Libraries Enabled 00:01:12.399 ================= 00:01:12.399 00:01:12.399 libs: 00:01:12.399 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:12.399 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:12.399 cryptodev, dmadev, power, reorder, security, vhost, 00:01:12.399 00:01:12.399 Message: 00:01:12.399 =============== 00:01:12.399 Drivers Enabled 00:01:12.399 =============== 00:01:12.399 00:01:12.399 common: 00:01:12.399 00:01:12.399 bus: 00:01:12.399 pci, vdev, 00:01:12.399 mempool: 00:01:12.399 ring, 00:01:12.399 dma: 00:01:12.399 00:01:12.399 net: 00:01:12.399 00:01:12.399 crypto: 00:01:12.399 00:01:12.399 compress: 00:01:12.399 00:01:12.399 vdpa: 00:01:12.399 00:01:12.399 00:01:12.399 Message: 00:01:12.399 ================= 00:01:12.399 Content Skipped 00:01:12.399 ================= 00:01:12.399 00:01:12.399 apps: 00:01:12.399 dumpcap: explicitly disabled via build config 00:01:12.399 graph: explicitly disabled via build config 00:01:12.399 pdump: explicitly disabled via build config 00:01:12.399 proc-info: explicitly disabled via build config 00:01:12.399 test-acl: explicitly disabled via build config 00:01:12.399 test-bbdev: explicitly disabled via build config 00:01:12.399 test-cmdline: explicitly disabled via build config 00:01:12.399 test-compress-perf: explicitly disabled via build config 00:01:12.399 test-crypto-perf: explicitly disabled via build config 00:01:12.399 test-dma-perf: explicitly disabled via build config 00:01:12.399 test-eventdev: explicitly disabled via build config 00:01:12.399 test-fib: explicitly disabled via build config 00:01:12.399 test-flow-perf: explicitly disabled via build config 00:01:12.399 test-gpudev: explicitly disabled via build config 00:01:12.399 test-mldev: explicitly disabled via build config 
00:01:12.399 test-pipeline: explicitly disabled via build config 00:01:12.399 test-pmd: explicitly disabled via build config 00:01:12.399 test-regex: explicitly disabled via build config 00:01:12.399 test-sad: explicitly disabled via build config 00:01:12.399 test-security-perf: explicitly disabled via build config 00:01:12.399 00:01:12.399 libs: 00:01:12.399 metrics: explicitly disabled via build config 00:01:12.399 acl: explicitly disabled via build config 00:01:12.399 bbdev: explicitly disabled via build config 00:01:12.399 bitratestats: explicitly disabled via build config 00:01:12.399 bpf: explicitly disabled via build config 00:01:12.399 cfgfile: explicitly disabled via build config 00:01:12.399 distributor: explicitly disabled via build config 00:01:12.399 efd: explicitly disabled via build config 00:01:12.399 eventdev: explicitly disabled via build config 00:01:12.399 dispatcher: explicitly disabled via build config 00:01:12.399 gpudev: explicitly disabled via build config 00:01:12.399 gro: explicitly disabled via build config 00:01:12.399 gso: explicitly disabled via build config 00:01:12.399 ip_frag: explicitly disabled via build config 00:01:12.399 jobstats: explicitly disabled via build config 00:01:12.399 latencystats: explicitly disabled via build config 00:01:12.399 lpm: explicitly disabled via build config 00:01:12.399 member: explicitly disabled via build config 00:01:12.399 pcapng: explicitly disabled via build config 00:01:12.399 rawdev: explicitly disabled via build config 00:01:12.399 regexdev: explicitly disabled via build config 00:01:12.399 mldev: explicitly disabled via build config 00:01:12.399 rib: explicitly disabled via build config 00:01:12.399 sched: explicitly disabled via build config 00:01:12.399 stack: explicitly disabled via build config 00:01:12.399 ipsec: explicitly disabled via build config 00:01:12.399 pdcp: explicitly disabled via build config 00:01:12.399 fib: explicitly disabled via build config 00:01:12.399 port: explicitly disabled via build config 00:01:12.399 pdump: explicitly disabled via build config 00:01:12.399 table: explicitly disabled via build config 00:01:12.399 pipeline: explicitly disabled via build config 00:01:12.399 graph: explicitly disabled via build config 00:01:12.399 node: explicitly disabled via build config 00:01:12.399 00:01:12.399 drivers: 00:01:12.399 common/cpt: not in enabled drivers build config 00:01:12.399 common/dpaax: not in enabled drivers build config 00:01:12.399 common/iavf: not in enabled drivers build config 00:01:12.399 common/idpf: not in enabled drivers build config 00:01:12.399 common/mvep: not in enabled drivers build config 00:01:12.399 common/octeontx: not in enabled drivers build config 00:01:12.399 bus/auxiliary: not in enabled drivers build config 00:01:12.399 bus/cdx: not in enabled drivers build config 00:01:12.399 bus/dpaa: not in enabled drivers build config 00:01:12.399 bus/fslmc: not in enabled drivers build config 00:01:12.399 bus/ifpga: not in enabled drivers build config 00:01:12.399 bus/platform: not in enabled drivers build config 00:01:12.399 bus/vmbus: not in enabled drivers build config 00:01:12.399 common/cnxk: not in enabled drivers build config 00:01:12.399 common/mlx5: not in enabled drivers build config 00:01:12.399 common/nfp: not in enabled drivers build config 00:01:12.399 common/qat: not in enabled drivers build config 00:01:12.399 common/sfc_efx: not in enabled drivers build config 00:01:12.399 mempool/bucket: not in enabled drivers build config 00:01:12.399 mempool/cnxk: 
not in enabled drivers build config 00:01:12.399 mempool/dpaa: not in enabled drivers build config 00:01:12.399 mempool/dpaa2: not in enabled drivers build config 00:01:12.399 mempool/octeontx: not in enabled drivers build config 00:01:12.399 mempool/stack: not in enabled drivers build config 00:01:12.399 dma/cnxk: not in enabled drivers build config 00:01:12.399 dma/dpaa: not in enabled drivers build config 00:01:12.399 dma/dpaa2: not in enabled drivers build config 00:01:12.399 dma/hisilicon: not in enabled drivers build config 00:01:12.399 dma/idxd: not in enabled drivers build config 00:01:12.399 dma/ioat: not in enabled drivers build config 00:01:12.399 dma/skeleton: not in enabled drivers build config 00:01:12.399 net/af_packet: not in enabled drivers build config 00:01:12.399 net/af_xdp: not in enabled drivers build config 00:01:12.399 net/ark: not in enabled drivers build config 00:01:12.399 net/atlantic: not in enabled drivers build config 00:01:12.400 net/avp: not in enabled drivers build config 00:01:12.400 net/axgbe: not in enabled drivers build config 00:01:12.400 net/bnx2x: not in enabled drivers build config 00:01:12.400 net/bnxt: not in enabled drivers build config 00:01:12.400 net/bonding: not in enabled drivers build config 00:01:12.400 net/cnxk: not in enabled drivers build config 00:01:12.400 net/cpfl: not in enabled drivers build config 00:01:12.400 net/cxgbe: not in enabled drivers build config 00:01:12.400 net/dpaa: not in enabled drivers build config 00:01:12.400 net/dpaa2: not in enabled drivers build config 00:01:12.400 net/e1000: not in enabled drivers build config 00:01:12.400 net/ena: not in enabled drivers build config 00:01:12.400 net/enetc: not in enabled drivers build config 00:01:12.400 net/enetfec: not in enabled drivers build config 00:01:12.400 net/enic: not in enabled drivers build config 00:01:12.400 net/failsafe: not in enabled drivers build config 00:01:12.400 net/fm10k: not in enabled drivers build config 00:01:12.400 net/gve: not in enabled drivers build config 00:01:12.400 net/hinic: not in enabled drivers build config 00:01:12.400 net/hns3: not in enabled drivers build config 00:01:12.400 net/i40e: not in enabled drivers build config 00:01:12.400 net/iavf: not in enabled drivers build config 00:01:12.400 net/ice: not in enabled drivers build config 00:01:12.400 net/idpf: not in enabled drivers build config 00:01:12.400 net/igc: not in enabled drivers build config 00:01:12.400 net/ionic: not in enabled drivers build config 00:01:12.400 net/ipn3ke: not in enabled drivers build config 00:01:12.400 net/ixgbe: not in enabled drivers build config 00:01:12.400 net/mana: not in enabled drivers build config 00:01:12.400 net/memif: not in enabled drivers build config 00:01:12.400 net/mlx4: not in enabled drivers build config 00:01:12.400 net/mlx5: not in enabled drivers build config 00:01:12.400 net/mvneta: not in enabled drivers build config 00:01:12.400 net/mvpp2: not in enabled drivers build config 00:01:12.400 net/netvsc: not in enabled drivers build config 00:01:12.400 net/nfb: not in enabled drivers build config 00:01:12.400 net/nfp: not in enabled drivers build config 00:01:12.400 net/ngbe: not in enabled drivers build config 00:01:12.400 net/null: not in enabled drivers build config 00:01:12.400 net/octeontx: not in enabled drivers build config 00:01:12.400 net/octeon_ep: not in enabled drivers build config 00:01:12.400 net/pcap: not in enabled drivers build config 00:01:12.400 net/pfe: not in enabled drivers build config 00:01:12.400 net/qede: 
not in enabled drivers build config 00:01:12.400 net/ring: not in enabled drivers build config 00:01:12.400 net/sfc: not in enabled drivers build config 00:01:12.400 net/softnic: not in enabled drivers build config 00:01:12.400 net/tap: not in enabled drivers build config 00:01:12.400 net/thunderx: not in enabled drivers build config 00:01:12.400 net/txgbe: not in enabled drivers build config 00:01:12.400 net/vdev_netvsc: not in enabled drivers build config 00:01:12.400 net/vhost: not in enabled drivers build config 00:01:12.400 net/virtio: not in enabled drivers build config 00:01:12.400 net/vmxnet3: not in enabled drivers build config 00:01:12.400 raw/*: missing internal dependency, "rawdev" 00:01:12.400 crypto/armv8: not in enabled drivers build config 00:01:12.400 crypto/bcmfs: not in enabled drivers build config 00:01:12.400 crypto/caam_jr: not in enabled drivers build config 00:01:12.400 crypto/ccp: not in enabled drivers build config 00:01:12.400 crypto/cnxk: not in enabled drivers build config 00:01:12.400 crypto/dpaa_sec: not in enabled drivers build config 00:01:12.400 crypto/dpaa2_sec: not in enabled drivers build config 00:01:12.400 crypto/ipsec_mb: not in enabled drivers build config 00:01:12.400 crypto/mlx5: not in enabled drivers build config 00:01:12.400 crypto/mvsam: not in enabled drivers build config 00:01:12.400 crypto/nitrox: not in enabled drivers build config 00:01:12.400 crypto/null: not in enabled drivers build config 00:01:12.400 crypto/octeontx: not in enabled drivers build config 00:01:12.400 crypto/openssl: not in enabled drivers build config 00:01:12.400 crypto/scheduler: not in enabled drivers build config 00:01:12.400 crypto/uadk: not in enabled drivers build config 00:01:12.400 crypto/virtio: not in enabled drivers build config 00:01:12.400 compress/isal: not in enabled drivers build config 00:01:12.400 compress/mlx5: not in enabled drivers build config 00:01:12.400 compress/octeontx: not in enabled drivers build config 00:01:12.400 compress/zlib: not in enabled drivers build config 00:01:12.400 regex/*: missing internal dependency, "regexdev" 00:01:12.400 ml/*: missing internal dependency, "mldev" 00:01:12.400 vdpa/ifc: not in enabled drivers build config 00:01:12.400 vdpa/mlx5: not in enabled drivers build config 00:01:12.400 vdpa/nfp: not in enabled drivers build config 00:01:12.400 vdpa/sfc: not in enabled drivers build config 00:01:12.400 event/*: missing internal dependency, "eventdev" 00:01:12.400 baseband/*: missing internal dependency, "bbdev" 00:01:12.400 gpu/*: missing internal dependency, "gpudev" 00:01:12.400 00:01:12.400 00:01:12.400 Build targets in project: 84 00:01:12.400 00:01:12.400 DPDK 23.11.0 00:01:12.400 00:01:12.400 User defined options 00:01:12.400 buildtype : debug 00:01:12.400 default_library : shared 00:01:12.400 libdir : lib 00:01:12.400 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:12.400 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:12.400 c_link_args : 00:01:12.400 cpu_instruction_set: native 00:01:12.400 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:12.400 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:12.400 enable_docs : false 00:01:12.400 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:12.400 enable_kmods : false 00:01:12.400 tests : false 00:01:12.400 00:01:12.400 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:12.661 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:12.928 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:12.928 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:12.928 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:12.928 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:12.928 [5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:12.928 [6/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:12.928 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:12.928 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:12.928 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:12.928 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:12.928 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:12.928 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:12.928 [13/264] Linking static target lib/librte_kvargs.a 00:01:12.928 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:12.928 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:12.928 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:12.928 [17/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:12.928 [18/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:12.928 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:12.928 [20/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:12.928 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:12.928 [22/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:12.928 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:12.928 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:12.928 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:13.186 [26/264] Linking static target lib/librte_pci.a 00:01:13.186 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:13.186 [28/264] Linking static target lib/librte_log.a 00:01:13.186 [29/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:13.186 [30/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:13.186 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:13.186 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:13.186 [33/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:13.186 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:13.186 [35/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:13.186 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:13.186 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:13.186 [38/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:13.186 [39/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:13.186 [40/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:13.186 [41/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:13.186 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:13.186 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:13.186 [44/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:13.445 [45/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.445 [46/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:13.445 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:13.445 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:13.445 [49/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:13.445 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:13.445 [51/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.445 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:13.445 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:13.445 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:13.445 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:13.445 [56/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:13.445 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:13.445 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:13.445 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:13.445 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:13.445 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:13.445 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:13.445 [63/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:13.445 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:13.445 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:13.445 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:13.445 [67/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:13.445 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:13.445 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:13.445 [70/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:13.445 [71/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:13.445 [72/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:13.445 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:13.445 
[74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:13.445 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:13.445 [76/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:13.445 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:13.445 [78/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:13.445 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:13.445 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:13.445 [81/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:13.445 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:13.445 [83/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:13.445 [84/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:13.445 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:13.445 [86/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:13.445 [87/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:13.445 [88/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:13.445 [89/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:13.445 [90/264] Linking static target lib/librte_telemetry.a 00:01:13.445 [91/264] Linking static target lib/librte_ring.a 00:01:13.445 [92/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:13.445 [93/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:13.445 [94/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:13.445 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:13.445 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:13.445 [97/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:13.445 [98/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:13.445 [99/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:13.445 [100/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:13.445 [101/264] Linking static target lib/librte_meter.a 00:01:13.445 [102/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:13.445 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:13.445 [104/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:13.445 [105/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:13.445 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:13.445 [107/264] Linking static target lib/librte_cmdline.a 00:01:13.445 [108/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:13.445 [109/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:13.445 [110/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:13.445 [111/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:13.445 [112/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:13.445 [113/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:13.445 [114/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:13.704 [115/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:13.704 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:13.704 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:13.704 [118/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:13.704 [119/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:13.704 [120/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:13.704 [121/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:13.704 [122/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:13.704 [123/264] Linking static target lib/librte_security.a 00:01:13.704 [124/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:13.704 [125/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:13.704 [126/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:13.704 [127/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:13.704 [128/264] Linking static target lib/librte_rcu.a 00:01:13.704 [129/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:13.704 [130/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:13.704 [131/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:13.704 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:13.704 [133/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:13.704 [134/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:13.704 [135/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:13.704 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:13.704 [137/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.704 [138/264] Linking static target lib/librte_timer.a 00:01:13.705 [139/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:13.705 [140/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:13.705 [141/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:13.705 [142/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:13.705 [143/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:13.705 [144/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:13.705 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:13.705 [146/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:13.705 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:13.705 [148/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:13.705 [149/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:13.705 [150/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:13.705 [151/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:13.705 [152/264] Linking static target lib/librte_dmadev.a 00:01:13.705 [153/264] Linking static target lib/librte_reorder.a 00:01:13.705 [154/264] Linking target lib/librte_log.so.24.0 00:01:13.705 [155/264] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:13.705 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:13.705 [157/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:13.705 [158/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:13.705 [159/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:13.705 [160/264] Linking static target lib/librte_power.a 00:01:13.705 [161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:13.705 [162/264] Linking static target lib/librte_net.a 00:01:13.705 [163/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:13.705 [164/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:13.705 [165/264] Linking static target lib/librte_compressdev.a 00:01:13.705 [166/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:13.705 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:13.705 [168/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:13.705 [169/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:13.705 [170/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:13.705 [171/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.705 [172/264] Linking static target lib/librte_mempool.a 00:01:13.705 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:13.705 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:13.705 [175/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.705 [176/264] Linking static target lib/librte_eal.a 00:01:13.705 [177/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:13.705 [178/264] Linking static target lib/librte_mbuf.a 00:01:13.705 [179/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.705 [180/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:13.705 [181/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:13.965 [182/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.965 [183/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.965 [184/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.965 [185/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:13.965 [186/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.965 [187/264] Linking static target drivers/librte_bus_vdev.a 00:01:13.965 [188/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:13.965 [189/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.965 [190/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.965 [191/264] Linking static target drivers/librte_bus_pci.a 00:01:13.965 [192/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.965 [193/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:13.965 [194/264] Linking target lib/librte_kvargs.so.24.0 00:01:13.965 [195/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:13.965 [196/264] Linking static target lib/librte_hash.a 00:01:13.965 [197/264] 
Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.965 [198/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.965 [199/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.965 [200/264] Linking static target drivers/librte_mempool_ring.a 00:01:13.965 [201/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:13.965 [202/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:13.965 [203/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.223 [204/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.223 [205/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:14.223 [206/264] Linking static target lib/librte_cryptodev.a 00:01:14.223 [207/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.223 [208/264] Linking target lib/librte_telemetry.so.24.0 00:01:14.224 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.224 [210/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.224 [211/264] Linking static target lib/librte_ethdev.a 00:01:14.224 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.224 [213/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.224 [214/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.224 [215/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:14.483 [216/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.483 [217/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:14.743 [218/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.743 [219/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.743 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.743 [221/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.743 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.743 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.314 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:15.314 [225/264] Linking static target lib/librte_vhost.a 00:01:16.258 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.645 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.939 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.486 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.486 [230/264] Linking target lib/librte_eal.so.24.0 00:01:25.486 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:25.486 [232/264] Linking target 
lib/librte_ring.so.24.0 00:01:25.486 [233/264] Linking target lib/librte_meter.so.24.0 00:01:25.486 [234/264] Linking target lib/librte_timer.so.24.0 00:01:25.486 [235/264] Linking target lib/librte_dmadev.so.24.0 00:01:25.486 [236/264] Linking target lib/librte_pci.so.24.0 00:01:25.486 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:25.486 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:25.486 [239/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:25.486 [240/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:25.486 [241/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:25.486 [242/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:25.486 [243/264] Linking target lib/librte_mempool.so.24.0 00:01:25.486 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:25.486 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:25.753 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:25.753 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:25.753 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:25.753 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:26.013 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:26.013 [251/264] Linking target lib/librte_net.so.24.0 00:01:26.013 [252/264] Linking target lib/librte_reorder.so.24.0 00:01:26.013 [253/264] Linking target lib/librte_compressdev.so.24.0 00:01:26.013 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:26.013 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:26.013 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:26.274 [257/264] Linking target lib/librte_hash.so.24.0 00:01:26.274 [258/264] Linking target lib/librte_ethdev.so.24.0 00:01:26.274 [259/264] Linking target lib/librte_cmdline.so.24.0 00:01:26.274 [260/264] Linking target lib/librte_security.so.24.0 00:01:26.274 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:26.274 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:26.274 [263/264] Linking target lib/librte_power.so.24.0 00:01:26.274 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:26.533 INFO: autodetecting backend as ninja 00:01:26.533 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:27.477 CC lib/ut_mock/mock.o 00:01:27.477 CC lib/ut/ut.o 00:01:27.477 CC lib/log/log.o 00:01:27.477 CC lib/log/log_flags.o 00:01:27.477 CC lib/log/log_deprecated.o 00:01:27.477 LIB libspdk_ut_mock.a 00:01:27.738 LIB libspdk_ut.a 00:01:27.738 LIB libspdk_log.a 00:01:27.738 SO libspdk_ut_mock.so.6.0 00:01:27.738 SO libspdk_ut.so.2.0 00:01:27.738 SO libspdk_log.so.7.0 00:01:27.738 SYMLINK libspdk_ut_mock.so 00:01:27.738 SYMLINK libspdk_ut.so 00:01:27.738 SYMLINK libspdk_log.so 00:01:27.999 CC lib/util/base64.o 00:01:27.999 CC lib/util/bit_array.o 00:01:27.999 CC lib/util/cpuset.o 00:01:27.999 CXX lib/trace_parser/trace.o 00:01:27.999 CC lib/util/crc16.o 00:01:27.999 CC lib/util/crc32.o 00:01:27.999 CC lib/util/crc32c.o 00:01:27.999 CC lib/dma/dma.o 00:01:27.999 CC lib/util/crc32_ieee.o 
00:01:27.999 CC lib/util/crc64.o 00:01:27.999 CC lib/util/dif.o 00:01:27.999 CC lib/util/fd.o 00:01:27.999 CC lib/util/file.o 00:01:27.999 CC lib/util/hexlify.o 00:01:27.999 CC lib/util/iov.o 00:01:27.999 CC lib/util/math.o 00:01:27.999 CC lib/util/pipe.o 00:01:27.999 CC lib/util/strerror_tls.o 00:01:27.999 CC lib/util/string.o 00:01:27.999 CC lib/util/fd_group.o 00:01:27.999 CC lib/ioat/ioat.o 00:01:27.999 CC lib/util/uuid.o 00:01:27.999 CC lib/util/xor.o 00:01:27.999 CC lib/util/zipf.o 00:01:28.259 CC lib/vfio_user/host/vfio_user_pci.o 00:01:28.259 CC lib/vfio_user/host/vfio_user.o 00:01:28.259 LIB libspdk_dma.a 00:01:28.259 SO libspdk_dma.so.4.0 00:01:28.259 LIB libspdk_ioat.a 00:01:28.520 SYMLINK libspdk_dma.so 00:01:28.520 SO libspdk_ioat.so.7.0 00:01:28.520 SYMLINK libspdk_ioat.so 00:01:28.520 LIB libspdk_vfio_user.a 00:01:28.520 LIB libspdk_util.a 00:01:28.520 SO libspdk_vfio_user.so.5.0 00:01:28.520 SO libspdk_util.so.9.0 00:01:28.520 SYMLINK libspdk_vfio_user.so 00:01:28.782 SYMLINK libspdk_util.so 00:01:28.782 LIB libspdk_trace_parser.a 00:01:29.043 SO libspdk_trace_parser.so.5.0 00:01:29.043 SYMLINK libspdk_trace_parser.so 00:01:29.043 CC lib/json/json_parse.o 00:01:29.043 CC lib/json/json_util.o 00:01:29.043 CC lib/json/json_write.o 00:01:29.043 CC lib/conf/conf.o 00:01:29.043 CC lib/rdma/common.o 00:01:29.043 CC lib/rdma/rdma_verbs.o 00:01:29.043 CC lib/vmd/vmd.o 00:01:29.043 CC lib/vmd/led.o 00:01:29.043 CC lib/idxd/idxd.o 00:01:29.043 CC lib/idxd/idxd_user.o 00:01:29.043 CC lib/env_dpdk/env.o 00:01:29.043 CC lib/env_dpdk/memory.o 00:01:29.043 CC lib/env_dpdk/pci.o 00:01:29.043 CC lib/env_dpdk/init.o 00:01:29.043 CC lib/env_dpdk/threads.o 00:01:29.043 CC lib/env_dpdk/pci_ioat.o 00:01:29.043 CC lib/env_dpdk/pci_virtio.o 00:01:29.043 CC lib/env_dpdk/pci_vmd.o 00:01:29.043 CC lib/env_dpdk/pci_idxd.o 00:01:29.043 CC lib/env_dpdk/pci_event.o 00:01:29.043 CC lib/env_dpdk/sigbus_handler.o 00:01:29.043 CC lib/env_dpdk/pci_dpdk.o 00:01:29.043 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:29.043 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:29.304 LIB libspdk_conf.a 00:01:29.304 SO libspdk_conf.so.6.0 00:01:29.304 LIB libspdk_rdma.a 00:01:29.304 LIB libspdk_json.a 00:01:29.564 SO libspdk_rdma.so.6.0 00:01:29.564 SYMLINK libspdk_conf.so 00:01:29.564 SO libspdk_json.so.6.0 00:01:29.564 SYMLINK libspdk_rdma.so 00:01:29.564 SYMLINK libspdk_json.so 00:01:29.564 LIB libspdk_idxd.a 00:01:29.564 SO libspdk_idxd.so.12.0 00:01:29.564 LIB libspdk_vmd.a 00:01:29.824 SYMLINK libspdk_idxd.so 00:01:29.824 SO libspdk_vmd.so.6.0 00:01:29.824 SYMLINK libspdk_vmd.so 00:01:29.824 CC lib/jsonrpc/jsonrpc_server.o 00:01:29.824 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:29.824 CC lib/jsonrpc/jsonrpc_client.o 00:01:29.824 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:30.085 LIB libspdk_jsonrpc.a 00:01:30.085 SO libspdk_jsonrpc.so.6.0 00:01:30.346 SYMLINK libspdk_jsonrpc.so 00:01:30.346 LIB libspdk_env_dpdk.a 00:01:30.346 SO libspdk_env_dpdk.so.14.0 00:01:30.607 SYMLINK libspdk_env_dpdk.so 00:01:30.608 CC lib/rpc/rpc.o 00:01:30.868 LIB libspdk_rpc.a 00:01:30.868 SO libspdk_rpc.so.6.0 00:01:30.868 SYMLINK libspdk_rpc.so 00:01:31.129 CC lib/notify/notify.o 00:01:31.129 CC lib/notify/notify_rpc.o 00:01:31.129 CC lib/keyring/keyring.o 00:01:31.129 CC lib/keyring/keyring_rpc.o 00:01:31.392 CC lib/trace/trace.o 00:01:31.392 CC lib/trace/trace_flags.o 00:01:31.392 CC lib/trace/trace_rpc.o 00:01:31.392 LIB libspdk_notify.a 00:01:31.392 SO libspdk_notify.so.6.0 00:01:31.392 LIB libspdk_keyring.a 00:01:31.392 LIB libspdk_trace.a 00:01:31.653 
SYMLINK libspdk_notify.so 00:01:31.653 SO libspdk_keyring.so.1.0 00:01:31.653 SO libspdk_trace.so.10.0 00:01:31.653 SYMLINK libspdk_keyring.so 00:01:31.653 SYMLINK libspdk_trace.so 00:01:31.914 CC lib/sock/sock.o 00:01:31.914 CC lib/thread/thread.o 00:01:31.914 CC lib/sock/sock_rpc.o 00:01:31.914 CC lib/thread/iobuf.o 00:01:32.488 LIB libspdk_sock.a 00:01:32.488 SO libspdk_sock.so.9.0 00:01:32.488 SYMLINK libspdk_sock.so 00:01:32.791 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:32.791 CC lib/nvme/nvme_ctrlr.o 00:01:32.791 CC lib/nvme/nvme_ns.o 00:01:32.791 CC lib/nvme/nvme_fabric.o 00:01:32.791 CC lib/nvme/nvme_ns_cmd.o 00:01:32.791 CC lib/nvme/nvme_pcie_common.o 00:01:32.791 CC lib/nvme/nvme_pcie.o 00:01:32.791 CC lib/nvme/nvme_qpair.o 00:01:32.791 CC lib/nvme/nvme.o 00:01:32.791 CC lib/nvme/nvme_quirks.o 00:01:32.791 CC lib/nvme/nvme_transport.o 00:01:32.791 CC lib/nvme/nvme_discovery.o 00:01:32.791 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:32.791 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:32.791 CC lib/nvme/nvme_tcp.o 00:01:32.791 CC lib/nvme/nvme_opal.o 00:01:32.791 CC lib/nvme/nvme_io_msg.o 00:01:32.791 CC lib/nvme/nvme_poll_group.o 00:01:32.791 CC lib/nvme/nvme_zns.o 00:01:32.791 CC lib/nvme/nvme_stubs.o 00:01:32.791 CC lib/nvme/nvme_auth.o 00:01:32.791 CC lib/nvme/nvme_cuse.o 00:01:32.791 CC lib/nvme/nvme_vfio_user.o 00:01:32.791 CC lib/nvme/nvme_rdma.o 00:01:33.083 LIB libspdk_thread.a 00:01:33.344 SO libspdk_thread.so.10.0 00:01:33.344 SYMLINK libspdk_thread.so 00:01:33.606 CC lib/virtio/virtio.o 00:01:33.606 CC lib/virtio/virtio_vhost_user.o 00:01:33.606 CC lib/virtio/virtio_vfio_user.o 00:01:33.606 CC lib/virtio/virtio_pci.o 00:01:33.606 CC lib/blob/request.o 00:01:33.606 CC lib/blob/blobstore.o 00:01:33.606 CC lib/init/json_config.o 00:01:33.606 CC lib/init/subsystem.o 00:01:33.606 CC lib/blob/zeroes.o 00:01:33.606 CC lib/init/subsystem_rpc.o 00:01:33.606 CC lib/blob/blob_bs_dev.o 00:01:33.606 CC lib/init/rpc.o 00:01:33.606 CC lib/accel/accel.o 00:01:33.606 CC lib/accel/accel_rpc.o 00:01:33.606 CC lib/accel/accel_sw.o 00:01:33.606 CC lib/vfu_tgt/tgt_endpoint.o 00:01:33.606 CC lib/vfu_tgt/tgt_rpc.o 00:01:33.867 LIB libspdk_init.a 00:01:33.867 LIB libspdk_virtio.a 00:01:33.867 SO libspdk_init.so.5.0 00:01:33.867 SO libspdk_virtio.so.7.0 00:01:33.867 LIB libspdk_vfu_tgt.a 00:01:34.129 SYMLINK libspdk_init.so 00:01:34.129 SO libspdk_vfu_tgt.so.3.0 00:01:34.129 SYMLINK libspdk_virtio.so 00:01:34.129 SYMLINK libspdk_vfu_tgt.so 00:01:34.390 CC lib/event/app.o 00:01:34.390 CC lib/event/reactor.o 00:01:34.390 CC lib/event/log_rpc.o 00:01:34.390 CC lib/event/app_rpc.o 00:01:34.390 CC lib/event/scheduler_static.o 00:01:34.651 LIB libspdk_accel.a 00:01:34.651 SO libspdk_accel.so.15.0 00:01:34.651 LIB libspdk_nvme.a 00:01:34.651 SYMLINK libspdk_accel.so 00:01:34.651 SO libspdk_nvme.so.13.0 00:01:34.651 LIB libspdk_event.a 00:01:34.913 SO libspdk_event.so.13.0 00:01:34.913 SYMLINK libspdk_event.so 00:01:34.913 CC lib/bdev/bdev.o 00:01:34.913 CC lib/bdev/bdev_rpc.o 00:01:34.913 CC lib/bdev/bdev_zone.o 00:01:34.913 CC lib/bdev/part.o 00:01:34.913 CC lib/bdev/scsi_nvme.o 00:01:34.913 SYMLINK libspdk_nvme.so 00:01:36.300 LIB libspdk_blob.a 00:01:36.300 SO libspdk_blob.so.11.0 00:01:36.300 SYMLINK libspdk_blob.so 00:01:36.561 CC lib/lvol/lvol.o 00:01:36.561 CC lib/blobfs/blobfs.o 00:01:36.561 CC lib/blobfs/tree.o 00:01:37.133 LIB libspdk_bdev.a 00:01:37.133 SO libspdk_bdev.so.15.0 00:01:37.394 LIB libspdk_blobfs.a 00:01:37.394 SYMLINK libspdk_bdev.so 00:01:37.394 SO libspdk_blobfs.so.10.0 00:01:37.394 LIB 
libspdk_lvol.a 00:01:37.394 SO libspdk_lvol.so.10.0 00:01:37.394 SYMLINK libspdk_blobfs.so 00:01:37.394 SYMLINK libspdk_lvol.so 00:01:37.654 CC lib/ublk/ublk.o 00:01:37.654 CC lib/ublk/ublk_rpc.o 00:01:37.654 CC lib/ftl/ftl_core.o 00:01:37.654 CC lib/ftl/ftl_init.o 00:01:37.654 CC lib/ftl/ftl_layout.o 00:01:37.654 CC lib/ftl/ftl_debug.o 00:01:37.654 CC lib/ftl/ftl_io.o 00:01:37.654 CC lib/nvmf/ctrlr.o 00:01:37.654 CC lib/ftl/ftl_sb.o 00:01:37.654 CC lib/ftl/ftl_l2p.o 00:01:37.654 CC lib/scsi/dev.o 00:01:37.654 CC lib/ftl/ftl_l2p_flat.o 00:01:37.654 CC lib/nvmf/ctrlr_discovery.o 00:01:37.654 CC lib/ftl/ftl_nv_cache.o 00:01:37.654 CC lib/scsi/lun.o 00:01:37.654 CC lib/nvmf/ctrlr_bdev.o 00:01:37.654 CC lib/ftl/ftl_band.o 00:01:37.654 CC lib/scsi/port.o 00:01:37.654 CC lib/ftl/ftl_band_ops.o 00:01:37.654 CC lib/nvmf/subsystem.o 00:01:37.654 CC lib/nvmf/nvmf.o 00:01:37.654 CC lib/scsi/scsi.o 00:01:37.654 CC lib/ftl/ftl_writer.o 00:01:37.654 CC lib/nvmf/nvmf_rpc.o 00:01:37.654 CC lib/scsi/scsi_bdev.o 00:01:37.654 CC lib/ftl/ftl_rq.o 00:01:37.654 CC lib/nvmf/transport.o 00:01:37.654 CC lib/scsi/scsi_pr.o 00:01:37.654 CC lib/ftl/ftl_reloc.o 00:01:37.654 CC lib/nvmf/tcp.o 00:01:37.654 CC lib/nbd/nbd.o 00:01:37.654 CC lib/ftl/ftl_p2l.o 00:01:37.654 CC lib/scsi/task.o 00:01:37.654 CC lib/nvmf/stubs.o 00:01:37.654 CC lib/nvmf/vfio_user.o 00:01:37.654 CC lib/scsi/scsi_rpc.o 00:01:37.654 CC lib/nbd/nbd_rpc.o 00:01:37.654 CC lib/ftl/ftl_l2p_cache.o 00:01:37.655 CC lib/nvmf/mdns_server.o 00:01:37.655 CC lib/nvmf/rdma.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:37.655 CC lib/nvmf/auth.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:37.655 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:37.655 CC lib/ftl/utils/ftl_mempool.o 00:01:37.655 CC lib/ftl/utils/ftl_conf.o 00:01:37.655 CC lib/ftl/utils/ftl_md.o 00:01:37.655 CC lib/ftl/utils/ftl_property.o 00:01:37.655 CC lib/ftl/utils/ftl_bitmap.o 00:01:37.655 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:37.655 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:37.655 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:37.655 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:37.655 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:37.655 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:37.655 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:37.655 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:37.655 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:37.655 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:37.655 CC lib/ftl/base/ftl_base_dev.o 00:01:37.913 CC lib/ftl/base/ftl_base_bdev.o 00:01:37.913 CC lib/ftl/ftl_trace.o 00:01:38.174 LIB libspdk_nbd.a 00:01:38.174 SO libspdk_nbd.so.7.0 00:01:38.436 LIB libspdk_scsi.a 00:01:38.436 SYMLINK libspdk_nbd.so 00:01:38.436 SO libspdk_scsi.so.9.0 00:01:38.436 LIB libspdk_ublk.a 00:01:38.436 SYMLINK libspdk_scsi.so 00:01:38.436 SO libspdk_ublk.so.3.0 00:01:38.436 SYMLINK libspdk_ublk.so 00:01:38.698 LIB libspdk_ftl.a 00:01:38.698 CC lib/vhost/vhost.o 00:01:38.698 CC lib/iscsi/conn.o 00:01:38.698 CC lib/vhost/vhost_rpc.o 00:01:38.698 CC lib/iscsi/init_grp.o 00:01:38.698 CC lib/vhost/vhost_scsi.o 00:01:38.698 CC lib/vhost/vhost_blk.o 
00:01:38.698 CC lib/iscsi/iscsi.o 00:01:38.698 SO libspdk_ftl.so.9.0 00:01:38.698 CC lib/iscsi/md5.o 00:01:38.698 CC lib/vhost/rte_vhost_user.o 00:01:38.698 CC lib/iscsi/param.o 00:01:38.698 CC lib/iscsi/portal_grp.o 00:01:38.698 CC lib/iscsi/tgt_node.o 00:01:38.698 CC lib/iscsi/iscsi_subsystem.o 00:01:38.698 CC lib/iscsi/iscsi_rpc.o 00:01:38.698 CC lib/iscsi/task.o 00:01:39.268 SYMLINK libspdk_ftl.so 00:01:39.529 LIB libspdk_nvmf.a 00:01:39.529 SO libspdk_nvmf.so.18.0 00:01:39.789 LIB libspdk_vhost.a 00:01:39.789 SO libspdk_vhost.so.8.0 00:01:39.789 SYMLINK libspdk_nvmf.so 00:01:39.789 SYMLINK libspdk_vhost.so 00:01:40.048 LIB libspdk_iscsi.a 00:01:40.048 SO libspdk_iscsi.so.8.0 00:01:40.308 SYMLINK libspdk_iscsi.so 00:01:40.881 CC module/vfu_device/vfu_virtio.o 00:01:40.881 CC module/env_dpdk/env_dpdk_rpc.o 00:01:40.881 CC module/vfu_device/vfu_virtio_blk.o 00:01:40.881 CC module/vfu_device/vfu_virtio_scsi.o 00:01:40.881 CC module/vfu_device/vfu_virtio_rpc.o 00:01:40.881 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:40.881 LIB libspdk_env_dpdk_rpc.a 00:01:40.881 CC module/scheduler/gscheduler/gscheduler.o 00:01:40.881 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:40.881 CC module/blob/bdev/blob_bdev.o 00:01:40.881 CC module/accel/ioat/accel_ioat.o 00:01:40.881 CC module/accel/ioat/accel_ioat_rpc.o 00:01:40.881 CC module/accel/dsa/accel_dsa.o 00:01:40.881 CC module/accel/dsa/accel_dsa_rpc.o 00:01:40.881 CC module/accel/error/accel_error.o 00:01:40.881 CC module/accel/error/accel_error_rpc.o 00:01:40.881 CC module/sock/posix/posix.o 00:01:40.881 CC module/accel/iaa/accel_iaa.o 00:01:40.881 CC module/accel/iaa/accel_iaa_rpc.o 00:01:40.881 CC module/keyring/file/keyring.o 00:01:40.881 CC module/keyring/file/keyring_rpc.o 00:01:40.881 SO libspdk_env_dpdk_rpc.so.6.0 00:01:40.881 SYMLINK libspdk_env_dpdk_rpc.so 00:01:41.140 LIB libspdk_scheduler_dpdk_governor.a 00:01:41.140 LIB libspdk_scheduler_gscheduler.a 00:01:41.140 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:41.140 LIB libspdk_keyring_file.a 00:01:41.140 LIB libspdk_accel_error.a 00:01:41.140 SO libspdk_scheduler_gscheduler.so.4.0 00:01:41.140 LIB libspdk_scheduler_dynamic.a 00:01:41.140 LIB libspdk_accel_ioat.a 00:01:41.140 LIB libspdk_accel_iaa.a 00:01:41.140 SO libspdk_accel_error.so.2.0 00:01:41.140 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:41.140 SO libspdk_keyring_file.so.1.0 00:01:41.140 SO libspdk_accel_ioat.so.6.0 00:01:41.140 SO libspdk_scheduler_dynamic.so.4.0 00:01:41.140 LIB libspdk_accel_dsa.a 00:01:41.140 SO libspdk_accel_iaa.so.3.0 00:01:41.141 SYMLINK libspdk_scheduler_gscheduler.so 00:01:41.141 LIB libspdk_blob_bdev.a 00:01:41.141 SO libspdk_accel_dsa.so.5.0 00:01:41.141 SYMLINK libspdk_accel_error.so 00:01:41.141 SYMLINK libspdk_keyring_file.so 00:01:41.141 SO libspdk_blob_bdev.so.11.0 00:01:41.141 SYMLINK libspdk_scheduler_dynamic.so 00:01:41.141 SYMLINK libspdk_accel_ioat.so 00:01:41.141 SYMLINK libspdk_accel_iaa.so 00:01:41.141 SYMLINK libspdk_accel_dsa.so 00:01:41.401 SYMLINK libspdk_blob_bdev.so 00:01:41.401 LIB libspdk_vfu_device.a 00:01:41.401 SO libspdk_vfu_device.so.3.0 00:01:41.401 SYMLINK libspdk_vfu_device.so 00:01:41.661 LIB libspdk_sock_posix.a 00:01:41.661 SO libspdk_sock_posix.so.6.0 00:01:41.661 SYMLINK libspdk_sock_posix.so 00:01:41.922 CC module/bdev/nvme/bdev_nvme.o 00:01:41.922 CC module/bdev/nvme/nvme_rpc.o 00:01:41.922 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:41.922 CC module/bdev/nvme/bdev_mdns_client.o 00:01:41.922 CC module/bdev/nvme/vbdev_opal.o 00:01:41.922 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:01:41.922 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:41.922 CC module/bdev/delay/vbdev_delay.o 00:01:41.922 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:41.922 CC module/bdev/gpt/gpt.o 00:01:41.922 CC module/bdev/ftl/bdev_ftl.o 00:01:41.922 CC module/bdev/raid/bdev_raid.o 00:01:41.922 CC module/bdev/gpt/vbdev_gpt.o 00:01:41.922 CC module/bdev/error/vbdev_error.o 00:01:41.922 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:41.922 CC module/bdev/malloc/bdev_malloc.o 00:01:41.922 CC module/bdev/error/vbdev_error_rpc.o 00:01:41.922 CC module/bdev/raid/bdev_raid_sb.o 00:01:41.922 CC module/bdev/raid/bdev_raid_rpc.o 00:01:41.922 CC module/bdev/aio/bdev_aio.o 00:01:41.922 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:41.922 CC module/bdev/raid/raid0.o 00:01:41.922 CC module/bdev/lvol/vbdev_lvol.o 00:01:41.922 CC module/bdev/aio/bdev_aio_rpc.o 00:01:41.922 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:41.922 CC module/bdev/raid/raid1.o 00:01:41.922 CC module/bdev/raid/concat.o 00:01:41.922 CC module/blobfs/bdev/blobfs_bdev.o 00:01:41.922 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:41.922 CC module/bdev/iscsi/bdev_iscsi.o 00:01:41.922 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:41.922 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:41.922 CC module/bdev/passthru/vbdev_passthru.o 00:01:41.922 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:41.922 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:41.922 CC module/bdev/split/vbdev_split.o 00:01:41.922 CC module/bdev/split/vbdev_split_rpc.o 00:01:41.922 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:41.922 CC module/bdev/null/bdev_null.o 00:01:41.922 CC module/bdev/null/bdev_null_rpc.o 00:01:41.922 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:41.922 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:42.182 LIB libspdk_blobfs_bdev.a 00:01:42.182 SO libspdk_blobfs_bdev.so.6.0 00:01:42.182 LIB libspdk_bdev_split.a 00:01:42.182 LIB libspdk_bdev_error.a 00:01:42.182 LIB libspdk_bdev_null.a 00:01:42.182 LIB libspdk_bdev_gpt.a 00:01:42.182 SO libspdk_bdev_split.so.6.0 00:01:42.182 LIB libspdk_bdev_ftl.a 00:01:42.182 SO libspdk_bdev_error.so.6.0 00:01:42.182 LIB libspdk_bdev_passthru.a 00:01:42.182 LIB libspdk_bdev_aio.a 00:01:42.182 LIB libspdk_bdev_zone_block.a 00:01:42.182 SYMLINK libspdk_blobfs_bdev.so 00:01:42.182 SO libspdk_bdev_gpt.so.6.0 00:01:42.182 SO libspdk_bdev_null.so.6.0 00:01:42.182 SO libspdk_bdev_ftl.so.6.0 00:01:42.182 SO libspdk_bdev_passthru.so.6.0 00:01:42.182 LIB libspdk_bdev_delay.a 00:01:42.182 SO libspdk_bdev_aio.so.6.0 00:01:42.182 SO libspdk_bdev_zone_block.so.6.0 00:01:42.182 SYMLINK libspdk_bdev_split.so 00:01:42.182 LIB libspdk_bdev_malloc.a 00:01:42.182 LIB libspdk_bdev_iscsi.a 00:01:42.182 SYMLINK libspdk_bdev_error.so 00:01:42.182 SYMLINK libspdk_bdev_null.so 00:01:42.182 SYMLINK libspdk_bdev_passthru.so 00:01:42.182 SO libspdk_bdev_delay.so.6.0 00:01:42.182 SYMLINK libspdk_bdev_gpt.so 00:01:42.182 SYMLINK libspdk_bdev_ftl.so 00:01:42.182 SYMLINK libspdk_bdev_aio.so 00:01:42.182 SO libspdk_bdev_iscsi.so.6.0 00:01:42.182 SO libspdk_bdev_malloc.so.6.0 00:01:42.443 SYMLINK libspdk_bdev_zone_block.so 00:01:42.443 LIB libspdk_bdev_lvol.a 00:01:42.443 SYMLINK libspdk_bdev_delay.so 00:01:42.443 SO libspdk_bdev_lvol.so.6.0 00:01:42.443 SYMLINK libspdk_bdev_iscsi.so 00:01:42.443 SYMLINK libspdk_bdev_malloc.so 00:01:42.443 LIB libspdk_bdev_virtio.a 00:01:42.443 SO libspdk_bdev_virtio.so.6.0 00:01:42.443 SYMLINK libspdk_bdev_lvol.so 00:01:42.443 SYMLINK libspdk_bdev_virtio.so 00:01:42.704 LIB 
libspdk_bdev_raid.a 00:01:42.704 SO libspdk_bdev_raid.so.6.0 00:01:42.965 SYMLINK libspdk_bdev_raid.so 00:01:43.906 LIB libspdk_bdev_nvme.a 00:01:43.906 SO libspdk_bdev_nvme.so.7.0 00:01:43.906 SYMLINK libspdk_bdev_nvme.so 00:01:44.847 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:44.847 CC module/event/subsystems/scheduler/scheduler.o 00:01:44.847 CC module/event/subsystems/iobuf/iobuf.o 00:01:44.847 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:44.847 CC module/event/subsystems/vmd/vmd.o 00:01:44.847 CC module/event/subsystems/sock/sock.o 00:01:44.847 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:44.847 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:44.847 CC module/event/subsystems/keyring/keyring.o 00:01:44.847 LIB libspdk_event_sock.a 00:01:44.847 LIB libspdk_event_vfu_tgt.a 00:01:44.847 LIB libspdk_event_scheduler.a 00:01:44.847 LIB libspdk_event_keyring.a 00:01:44.847 LIB libspdk_event_vhost_blk.a 00:01:44.847 LIB libspdk_event_iobuf.a 00:01:44.847 LIB libspdk_event_vmd.a 00:01:44.847 SO libspdk_event_vfu_tgt.so.3.0 00:01:44.847 SO libspdk_event_sock.so.5.0 00:01:44.847 SO libspdk_event_scheduler.so.4.0 00:01:44.847 SO libspdk_event_vhost_blk.so.3.0 00:01:44.847 SO libspdk_event_keyring.so.1.0 00:01:44.847 SO libspdk_event_iobuf.so.3.0 00:01:44.847 SO libspdk_event_vmd.so.6.0 00:01:44.847 SYMLINK libspdk_event_vfu_tgt.so 00:01:44.847 SYMLINK libspdk_event_sock.so 00:01:44.847 SYMLINK libspdk_event_scheduler.so 00:01:44.847 SYMLINK libspdk_event_keyring.so 00:01:44.847 SYMLINK libspdk_event_vhost_blk.so 00:01:44.847 SYMLINK libspdk_event_iobuf.so 00:01:44.847 SYMLINK libspdk_event_vmd.so 00:01:45.417 CC module/event/subsystems/accel/accel.o 00:01:45.417 LIB libspdk_event_accel.a 00:01:45.417 SO libspdk_event_accel.so.6.0 00:01:45.417 SYMLINK libspdk_event_accel.so 00:01:45.988 CC module/event/subsystems/bdev/bdev.o 00:01:45.988 LIB libspdk_event_bdev.a 00:01:45.988 SO libspdk_event_bdev.so.6.0 00:01:46.248 SYMLINK libspdk_event_bdev.so 00:01:46.509 CC module/event/subsystems/scsi/scsi.o 00:01:46.509 CC module/event/subsystems/ublk/ublk.o 00:01:46.509 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:46.509 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:46.509 CC module/event/subsystems/nbd/nbd.o 00:01:46.509 LIB libspdk_event_ublk.a 00:01:46.509 LIB libspdk_event_scsi.a 00:01:46.770 LIB libspdk_event_nbd.a 00:01:46.770 SO libspdk_event_ublk.so.3.0 00:01:46.770 SO libspdk_event_scsi.so.6.0 00:01:46.770 SO libspdk_event_nbd.so.6.0 00:01:46.770 LIB libspdk_event_nvmf.a 00:01:46.770 SYMLINK libspdk_event_ublk.so 00:01:46.770 SYMLINK libspdk_event_scsi.so 00:01:46.770 SYMLINK libspdk_event_nbd.so 00:01:46.770 SO libspdk_event_nvmf.so.6.0 00:01:46.770 SYMLINK libspdk_event_nvmf.so 00:01:47.031 CC module/event/subsystems/iscsi/iscsi.o 00:01:47.031 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:47.291 LIB libspdk_event_vhost_scsi.a 00:01:47.291 LIB libspdk_event_iscsi.a 00:01:47.291 SO libspdk_event_vhost_scsi.so.3.0 00:01:47.291 SO libspdk_event_iscsi.so.6.0 00:01:47.291 SYMLINK libspdk_event_vhost_scsi.so 00:01:47.291 SYMLINK libspdk_event_iscsi.so 00:01:47.551 SO libspdk.so.6.0 00:01:47.551 SYMLINK libspdk.so 00:01:48.126 CXX app/trace/trace.o 00:01:48.126 CC app/spdk_lspci/spdk_lspci.o 00:01:48.126 CC test/rpc_client/rpc_client_test.o 00:01:48.126 CC app/spdk_nvme_discover/discovery_aer.o 00:01:48.126 CC app/trace_record/trace_record.o 00:01:48.126 CC app/spdk_nvme_perf/perf.o 00:01:48.126 CC app/spdk_top/spdk_top.o 00:01:48.126 TEST_HEADER 
include/spdk/accel_module.h 00:01:48.126 CC app/spdk_nvme_identify/identify.o 00:01:48.126 TEST_HEADER include/spdk/assert.h 00:01:48.126 TEST_HEADER include/spdk/accel.h 00:01:48.126 TEST_HEADER include/spdk/barrier.h 00:01:48.126 TEST_HEADER include/spdk/base64.h 00:01:48.126 TEST_HEADER include/spdk/bdev_module.h 00:01:48.126 TEST_HEADER include/spdk/bit_array.h 00:01:48.126 TEST_HEADER include/spdk/bdev_zone.h 00:01:48.126 CC app/nvmf_tgt/nvmf_main.o 00:01:48.126 TEST_HEADER include/spdk/blob_bdev.h 00:01:48.126 CC app/iscsi_tgt/iscsi_tgt.o 00:01:48.126 TEST_HEADER include/spdk/bdev.h 00:01:48.126 TEST_HEADER include/spdk/blobfs.h 00:01:48.126 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:48.127 TEST_HEADER include/spdk/bit_pool.h 00:01:48.127 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:48.127 TEST_HEADER include/spdk/conf.h 00:01:48.127 TEST_HEADER include/spdk/config.h 00:01:48.127 TEST_HEADER include/spdk/blob.h 00:01:48.127 TEST_HEADER include/spdk/cpuset.h 00:01:48.127 TEST_HEADER include/spdk/crc16.h 00:01:48.127 TEST_HEADER include/spdk/crc32.h 00:01:48.127 TEST_HEADER include/spdk/dif.h 00:01:48.127 TEST_HEADER include/spdk/crc64.h 00:01:48.127 TEST_HEADER include/spdk/dma.h 00:01:48.127 TEST_HEADER include/spdk/endian.h 00:01:48.127 TEST_HEADER include/spdk/env_dpdk.h 00:01:48.127 TEST_HEADER include/spdk/event.h 00:01:48.127 CC app/spdk_dd/spdk_dd.o 00:01:48.127 TEST_HEADER include/spdk/fd.h 00:01:48.127 TEST_HEADER include/spdk/fd_group.h 00:01:48.127 TEST_HEADER include/spdk/env.h 00:01:48.127 TEST_HEADER include/spdk/file.h 00:01:48.127 TEST_HEADER include/spdk/ftl.h 00:01:48.127 CC app/spdk_tgt/spdk_tgt.o 00:01:48.127 TEST_HEADER include/spdk/gpt_spec.h 00:01:48.127 TEST_HEADER include/spdk/hexlify.h 00:01:48.127 TEST_HEADER include/spdk/idxd.h 00:01:48.127 CC app/vhost/vhost.o 00:01:48.127 TEST_HEADER include/spdk/init.h 00:01:48.127 TEST_HEADER include/spdk/histogram_data.h 00:01:48.127 TEST_HEADER include/spdk/idxd_spec.h 00:01:48.127 TEST_HEADER include/spdk/ioat_spec.h 00:01:48.127 TEST_HEADER include/spdk/json.h 00:01:48.127 TEST_HEADER include/spdk/ioat.h 00:01:48.127 TEST_HEADER include/spdk/iscsi_spec.h 00:01:48.127 TEST_HEADER include/spdk/jsonrpc.h 00:01:48.127 TEST_HEADER include/spdk/keyring_module.h 00:01:48.127 TEST_HEADER include/spdk/likely.h 00:01:48.127 TEST_HEADER include/spdk/keyring.h 00:01:48.127 TEST_HEADER include/spdk/lvol.h 00:01:48.127 TEST_HEADER include/spdk/log.h 00:01:48.127 TEST_HEADER include/spdk/memory.h 00:01:48.127 TEST_HEADER include/spdk/nbd.h 00:01:48.127 TEST_HEADER include/spdk/mmio.h 00:01:48.127 TEST_HEADER include/spdk/notify.h 00:01:48.127 TEST_HEADER include/spdk/nvme.h 00:01:48.127 TEST_HEADER include/spdk/nvme_intel.h 00:01:48.127 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:48.127 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:48.127 TEST_HEADER include/spdk/nvme_spec.h 00:01:48.127 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:48.127 TEST_HEADER include/spdk/nvme_zns.h 00:01:48.127 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:48.127 TEST_HEADER include/spdk/nvmf_spec.h 00:01:48.127 TEST_HEADER include/spdk/nvmf.h 00:01:48.127 TEST_HEADER include/spdk/nvmf_transport.h 00:01:48.127 TEST_HEADER include/spdk/opal.h 00:01:48.127 TEST_HEADER include/spdk/opal_spec.h 00:01:48.127 TEST_HEADER include/spdk/pipe.h 00:01:48.127 TEST_HEADER include/spdk/pci_ids.h 00:01:48.127 TEST_HEADER include/spdk/reduce.h 00:01:48.127 TEST_HEADER include/spdk/queue.h 00:01:48.127 TEST_HEADER include/spdk/rpc.h 00:01:48.127 TEST_HEADER 
include/spdk/scheduler.h 00:01:48.127 TEST_HEADER include/spdk/scsi.h 00:01:48.127 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.127 TEST_HEADER include/spdk/sock.h 00:01:48.127 TEST_HEADER include/spdk/string.h 00:01:48.127 TEST_HEADER include/spdk/thread.h 00:01:48.127 TEST_HEADER include/spdk/stdinc.h 00:01:48.127 TEST_HEADER include/spdk/trace.h 00:01:48.127 TEST_HEADER include/spdk/trace_parser.h 00:01:48.127 TEST_HEADER include/spdk/tree.h 00:01:48.127 TEST_HEADER include/spdk/ublk.h 00:01:48.127 TEST_HEADER include/spdk/util.h 00:01:48.127 TEST_HEADER include/spdk/uuid.h 00:01:48.127 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.127 TEST_HEADER include/spdk/version.h 00:01:48.127 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.127 TEST_HEADER include/spdk/vhost.h 00:01:48.127 TEST_HEADER include/spdk/vmd.h 00:01:48.127 TEST_HEADER include/spdk/xor.h 00:01:48.127 TEST_HEADER include/spdk/zipf.h 00:01:48.127 CXX test/cpp_headers/accel.o 00:01:48.127 CXX test/cpp_headers/accel_module.o 00:01:48.127 CXX test/cpp_headers/assert.o 00:01:48.127 CXX test/cpp_headers/barrier.o 00:01:48.127 CXX test/cpp_headers/base64.o 00:01:48.127 CXX test/cpp_headers/bdev.o 00:01:48.127 CXX test/cpp_headers/bdev_module.o 00:01:48.127 CXX test/cpp_headers/bit_array.o 00:01:48.127 CXX test/cpp_headers/bdev_zone.o 00:01:48.127 CXX test/cpp_headers/blob_bdev.o 00:01:48.127 CXX test/cpp_headers/bit_pool.o 00:01:48.127 CXX test/cpp_headers/blobfs_bdev.o 00:01:48.127 CXX test/cpp_headers/blobfs.o 00:01:48.127 CXX test/cpp_headers/conf.o 00:01:48.127 CXX test/cpp_headers/blob.o 00:01:48.127 CXX test/cpp_headers/config.o 00:01:48.127 CXX test/cpp_headers/cpuset.o 00:01:48.127 CXX test/cpp_headers/crc16.o 00:01:48.127 CXX test/cpp_headers/crc32.o 00:01:48.127 CXX test/cpp_headers/crc64.o 00:01:48.127 CXX test/cpp_headers/dif.o 00:01:48.127 CXX test/cpp_headers/endian.o 00:01:48.127 CXX test/cpp_headers/env_dpdk.o 00:01:48.127 CXX test/cpp_headers/dma.o 00:01:48.127 CXX test/cpp_headers/env.o 00:01:48.127 CXX test/cpp_headers/fd_group.o 00:01:48.127 CXX test/cpp_headers/event.o 00:01:48.127 CXX test/cpp_headers/fd.o 00:01:48.127 CXX test/cpp_headers/file.o 00:01:48.127 CXX test/cpp_headers/ftl.o 00:01:48.127 CXX test/cpp_headers/hexlify.o 00:01:48.127 CXX test/cpp_headers/gpt_spec.o 00:01:48.127 CXX test/cpp_headers/idxd.o 00:01:48.127 CXX test/cpp_headers/histogram_data.o 00:01:48.127 CXX test/cpp_headers/idxd_spec.o 00:01:48.127 CXX test/cpp_headers/init.o 00:01:48.127 CXX test/cpp_headers/ioat.o 00:01:48.127 CXX test/cpp_headers/iscsi_spec.o 00:01:48.127 CXX test/cpp_headers/json.o 00:01:48.127 CXX test/cpp_headers/ioat_spec.o 00:01:48.127 CXX test/cpp_headers/jsonrpc.o 00:01:48.127 CXX test/cpp_headers/keyring_module.o 00:01:48.127 CXX test/cpp_headers/keyring.o 00:01:48.127 CXX test/cpp_headers/likely.o 00:01:48.127 CXX test/cpp_headers/log.o 00:01:48.127 CXX test/cpp_headers/lvol.o 00:01:48.127 CXX test/cpp_headers/memory.o 00:01:48.127 CXX test/cpp_headers/mmio.o 00:01:48.127 CXX test/cpp_headers/nbd.o 00:01:48.127 CXX test/cpp_headers/notify.o 00:01:48.127 CXX test/cpp_headers/nvme.o 00:01:48.127 CXX test/cpp_headers/nvme_ocssd.o 00:01:48.127 CXX test/cpp_headers/nvme_intel.o 00:01:48.127 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:48.127 CXX test/cpp_headers/nvme_spec.o 00:01:48.127 CXX test/cpp_headers/nvmf_cmd.o 00:01:48.127 CXX test/cpp_headers/nvme_zns.o 00:01:48.127 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:48.127 CXX test/cpp_headers/nvmf_spec.o 00:01:48.127 CXX test/cpp_headers/nvmf.o 
00:01:48.127 CXX test/cpp_headers/nvmf_transport.o 00:01:48.127 CXX test/cpp_headers/opal.o 00:01:48.127 CXX test/cpp_headers/opal_spec.o 00:01:48.127 CXX test/cpp_headers/pipe.o 00:01:48.127 CXX test/cpp_headers/pci_ids.o 00:01:48.127 CXX test/cpp_headers/queue.o 00:01:48.127 CXX test/cpp_headers/reduce.o 00:01:48.127 CXX test/cpp_headers/rpc.o 00:01:48.127 CC examples/nvme/hello_world/hello_world.o 00:01:48.127 CXX test/cpp_headers/scheduler.o 00:01:48.395 CC examples/nvme/reconnect/reconnect.o 00:01:48.395 CC examples/nvme/arbitration/arbitration.o 00:01:48.395 CC examples/sock/hello_world/hello_sock.o 00:01:48.395 CXX test/cpp_headers/scsi.o 00:01:48.395 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:48.395 CC examples/nvme/abort/abort.o 00:01:48.395 CC examples/vmd/led/led.o 00:01:48.395 CC test/app/stub/stub.o 00:01:48.395 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:48.395 CC examples/ioat/perf/perf.o 00:01:48.395 CC test/app/histogram_perf/histogram_perf.o 00:01:48.395 CC test/env/vtophys/vtophys.o 00:01:48.395 CC test/env/pci/pci_ut.o 00:01:48.395 CC examples/ioat/verify/verify.o 00:01:48.395 CC test/app/jsoncat/jsoncat.o 00:01:48.395 CC examples/accel/perf/accel_perf.o 00:01:48.395 CC examples/idxd/perf/perf.o 00:01:48.395 CC test/env/memory/memory_ut.o 00:01:48.395 CC examples/vmd/lsvmd/lsvmd.o 00:01:48.395 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:48.395 CC test/event/reactor_perf/reactor_perf.o 00:01:48.395 CC examples/bdev/hello_world/hello_bdev.o 00:01:48.395 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:48.395 CC test/nvme/startup/startup.o 00:01:48.395 CC test/nvme/connect_stress/connect_stress.o 00:01:48.395 CC test/event/event_perf/event_perf.o 00:01:48.395 CC examples/thread/thread/thread_ex.o 00:01:48.395 CC test/event/reactor/reactor.o 00:01:48.395 CC app/fio/nvme/fio_plugin.o 00:01:48.395 CC examples/util/zipf/zipf.o 00:01:48.395 CC test/nvme/overhead/overhead.o 00:01:48.395 CC test/nvme/reset/reset.o 00:01:48.395 CC test/nvme/err_injection/err_injection.o 00:01:48.395 CC examples/nvme/hotplug/hotplug.o 00:01:48.395 CC test/nvme/reserve/reserve.o 00:01:48.395 CC test/nvme/compliance/nvme_compliance.o 00:01:48.395 CC examples/nvmf/nvmf/nvmf.o 00:01:48.395 CC test/bdev/bdevio/bdevio.o 00:01:48.395 CC test/nvme/simple_copy/simple_copy.o 00:01:48.395 CC test/nvme/cuse/cuse.o 00:01:48.395 CC test/thread/poller_perf/poller_perf.o 00:01:48.395 CC test/accel/dif/dif.o 00:01:48.395 CC test/nvme/sgl/sgl.o 00:01:48.395 CC examples/blob/hello_world/hello_blob.o 00:01:48.395 CC test/app/bdev_svc/bdev_svc.o 00:01:48.395 CC test/nvme/boot_partition/boot_partition.o 00:01:48.395 CC test/nvme/aer/aer.o 00:01:48.395 CC examples/bdev/bdevperf/bdevperf.o 00:01:48.395 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:48.395 CC examples/blob/cli/blobcli.o 00:01:48.395 CC test/nvme/e2edp/nvme_dp.o 00:01:48.395 CC test/nvme/fused_ordering/fused_ordering.o 00:01:48.396 CC test/event/app_repeat/app_repeat.o 00:01:48.396 CC test/dma/test_dma/test_dma.o 00:01:48.396 CC test/nvme/fdp/fdp.o 00:01:48.396 CC test/event/scheduler/scheduler.o 00:01:48.396 CC test/blobfs/mkfs/mkfs.o 00:01:48.396 CC app/fio/bdev/fio_plugin.o 00:01:48.396 LINK spdk_lspci 00:01:48.661 LINK rpc_client_test 00:01:48.661 LINK nvmf_tgt 00:01:48.661 LINK interrupt_tgt 00:01:48.661 LINK spdk_nvme_discover 00:01:48.661 LINK iscsi_tgt 00:01:48.925 CC test/env/mem_callbacks/mem_callbacks.o 00:01:48.925 LINK vhost 00:01:48.925 LINK spdk_trace_record 00:01:48.925 CC test/lvol/esnap/esnap.o 00:01:48.925 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:48.925 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:48.925 LINK led 00:01:48.925 LINK vtophys 00:01:48.925 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:48.925 LINK spdk_tgt 00:01:48.925 LINK lsvmd 00:01:48.925 LINK startup 00:01:48.925 LINK event_perf 00:01:48.925 LINK env_dpdk_post_init 00:01:48.925 LINK reactor_perf 00:01:48.925 LINK jsoncat 00:01:48.925 LINK histogram_perf 00:01:48.925 LINK pmr_persistence 00:01:48.925 LINK bdev_svc 00:01:49.184 LINK err_injection 00:01:49.184 LINK stub 00:01:49.184 LINK ioat_perf 00:01:49.184 LINK reactor 00:01:49.184 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:49.184 LINK connect_stress 00:01:49.184 CXX test/cpp_headers/scsi_spec.o 00:01:49.184 LINK verify 00:01:49.184 LINK cmb_copy 00:01:49.184 CXX test/cpp_headers/sock.o 00:01:49.184 CXX test/cpp_headers/stdinc.o 00:01:49.184 CXX test/cpp_headers/string.o 00:01:49.184 CXX test/cpp_headers/thread.o 00:01:49.184 CXX test/cpp_headers/trace.o 00:01:49.184 LINK poller_perf 00:01:49.184 CXX test/cpp_headers/trace_parser.o 00:01:49.184 CXX test/cpp_headers/tree.o 00:01:49.184 LINK zipf 00:01:49.184 CXX test/cpp_headers/ublk.o 00:01:49.184 CXX test/cpp_headers/uuid.o 00:01:49.184 CXX test/cpp_headers/util.o 00:01:49.184 CXX test/cpp_headers/version.o 00:01:49.184 CXX test/cpp_headers/vfio_user_pci.o 00:01:49.184 CXX test/cpp_headers/vfio_user_spec.o 00:01:49.184 CXX test/cpp_headers/vhost.o 00:01:49.184 CXX test/cpp_headers/vmd.o 00:01:49.184 CXX test/cpp_headers/xor.o 00:01:49.184 CXX test/cpp_headers/zipf.o 00:01:49.184 LINK fused_ordering 00:01:49.184 LINK app_repeat 00:01:49.184 LINK boot_partition 00:01:49.184 LINK hello_sock 00:01:49.184 LINK spdk_dd 00:01:49.184 LINK doorbell_aers 00:01:49.184 LINK hello_world 00:01:49.184 LINK reserve 00:01:49.184 LINK arbitration 00:01:49.184 LINK mkfs 00:01:49.184 LINK scheduler 00:01:49.184 LINK hello_bdev 00:01:49.184 LINK hello_blob 00:01:49.184 LINK nvme_compliance 00:01:49.184 LINK sgl 00:01:49.184 LINK aer 00:01:49.184 LINK hotplug 00:01:49.184 LINK abort 00:01:49.184 LINK thread 00:01:49.184 LINK overhead 00:01:49.184 LINK nvmf 00:01:49.184 LINK simple_copy 00:01:49.445 LINK nvme_dp 00:01:49.445 LINK reset 00:01:49.445 LINK idxd_perf 00:01:49.445 LINK bdevio 00:01:49.445 LINK reconnect 00:01:49.445 LINK spdk_trace 00:01:49.445 LINK dif 00:01:49.445 LINK fdp 00:01:49.445 LINK accel_perf 00:01:49.445 LINK pci_ut 00:01:49.445 LINK test_dma 00:01:49.445 LINK spdk_bdev 00:01:49.445 LINK nvme_manage 00:01:49.445 LINK spdk_nvme 00:01:49.445 LINK blobcli 00:01:49.705 LINK nvme_fuzz 00:01:49.705 LINK spdk_nvme_perf 00:01:49.705 LINK spdk_top 00:01:49.705 LINK vhost_fuzz 00:01:49.705 LINK spdk_nvme_identify 00:01:49.705 LINK mem_callbacks 00:01:49.705 LINK bdevperf 00:01:49.705 LINK memory_ut 00:01:49.965 LINK cuse 00:01:50.535 LINK iscsi_fuzz 00:01:53.076 LINK esnap 00:01:53.076 00:01:53.076 real 0m48.926s 00:01:53.076 user 6m38.529s 00:01:53.076 sys 5m6.718s 00:01:53.076 16:46:31 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:53.076 16:46:31 make -- common/autotest_common.sh@10 -- $ set +x 00:01:53.076 ************************************ 00:01:53.076 END TEST make 00:01:53.077 ************************************ 00:01:53.077 16:46:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:53.077 16:46:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:53.077 16:46:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:53.077 16:46:31 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:01:53.077 16:46:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:53.077 16:46:31 -- pm/common@44 -- $ pid=1123344 00:01:53.077 16:46:31 -- pm/common@50 -- $ kill -TERM 1123344 00:01:53.077 16:46:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.077 16:46:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:53.077 16:46:31 -- pm/common@44 -- $ pid=1123345 00:01:53.077 16:46:31 -- pm/common@50 -- $ kill -TERM 1123345 00:01:53.077 16:46:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.077 16:46:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:53.077 16:46:31 -- pm/common@44 -- $ pid=1123348 00:01:53.077 16:46:31 -- pm/common@50 -- $ kill -TERM 1123348 00:01:53.077 16:46:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.077 16:46:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:53.077 16:46:31 -- pm/common@44 -- $ pid=1123373 00:01:53.077 16:46:31 -- pm/common@50 -- $ sudo -E kill -TERM 1123373 00:01:53.337 16:46:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:53.337 16:46:32 -- nvmf/common.sh@7 -- # uname -s 00:01:53.337 16:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:53.337 16:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:53.337 16:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:53.337 16:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:53.337 16:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:53.337 16:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:53.337 16:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:53.337 16:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:53.337 16:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:53.337 16:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:53.337 16:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:01:53.337 16:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:01:53.337 16:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:53.337 16:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:53.337 16:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:53.337 16:46:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:53.337 16:46:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.337 16:46:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:53.337 16:46:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.337 16:46:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.337 16:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.337 16:46:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.337 16:46:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.337 16:46:32 -- paths/export.sh@5 -- # export PATH 00:01:53.337 16:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.337 16:46:32 -- nvmf/common.sh@47 -- # : 0 00:01:53.337 16:46:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:53.337 16:46:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:53.337 16:46:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:53.337 16:46:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:53.337 16:46:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:53.337 16:46:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:53.337 16:46:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:53.337 16:46:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:53.338 16:46:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:53.338 16:46:32 -- spdk/autotest.sh@32 -- # uname -s 00:01:53.338 16:46:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:53.338 16:46:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:53.338 16:46:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.338 16:46:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:53.338 16:46:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.338 16:46:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:53.338 16:46:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:53.338 16:46:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:53.338 16:46:32 -- spdk/autotest.sh@48 -- # udevadm_pid=1185505 00:01:53.338 16:46:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:53.338 16:46:32 -- pm/common@17 -- # local monitor 00:01:53.338 16:46:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:53.338 16:46:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.338 16:46:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.338 16:46:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.338 16:46:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.338 16:46:32 -- pm/common@21 -- # date +%s 00:01:53.338 16:46:32 -- pm/common@21 -- # date +%s 00:01:53.338 16:46:32 -- pm/common@25 -- # sleep 1 00:01:53.338 16:46:32 -- pm/common@21 -- # date +%s 00:01:53.338 16:46:32 -- pm/common@21 -- # date +%s 00:01:53.338 16:46:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784392 00:01:53.338 16:46:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784392 00:01:53.338 16:46:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784392 00:01:53.338 16:46:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784392 00:01:53.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784392_collect-vmstat.pm.log 00:01:53.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784392_collect-cpu-load.pm.log 00:01:53.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784392_collect-cpu-temp.pm.log 00:01:53.338 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784392_collect-bmc-pm.bmc.pm.log 00:01:54.278 16:46:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:54.278 16:46:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:54.278 16:46:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:54.278 16:46:33 -- common/autotest_common.sh@10 -- # set +x 00:01:54.278 16:46:33 -- spdk/autotest.sh@59 -- # create_test_list 00:01:54.278 16:46:33 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:54.278 16:46:33 -- common/autotest_common.sh@10 -- # set +x 00:01:54.538 16:46:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:54.538 16:46:33 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.538 16:46:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.538 16:46:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:54.538 16:46:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.538 16:46:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:54.538 16:46:33 -- common/autotest_common.sh@1451 -- # uname 00:01:54.538 16:46:33 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:54.538 16:46:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:54.539 16:46:33 -- common/autotest_common.sh@1471 -- # uname 00:01:54.539 16:46:33 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:54.539 16:46:33 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:54.539 16:46:33 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:54.539 16:46:33 -- spdk/autotest.sh@72 -- # hash lcov 00:01:54.539 16:46:33 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:54.539 16:46:33 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:54.539 --rc lcov_branch_coverage=1 00:01:54.539 --rc lcov_function_coverage=1 00:01:54.539 --rc genhtml_branch_coverage=1 00:01:54.539 --rc genhtml_function_coverage=1 00:01:54.539 --rc genhtml_legend=1 00:01:54.539 --rc geninfo_all_blocks=1 00:01:54.539 ' 
00:01:54.539 16:46:33 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:54.539 --rc lcov_branch_coverage=1 00:01:54.539 --rc lcov_function_coverage=1 00:01:54.539 --rc genhtml_branch_coverage=1 00:01:54.539 --rc genhtml_function_coverage=1 00:01:54.539 --rc genhtml_legend=1 00:01:54.539 --rc geninfo_all_blocks=1 00:01:54.539 ' 00:01:54.539 16:46:33 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:54.539 --rc lcov_branch_coverage=1 00:01:54.539 --rc lcov_function_coverage=1 00:01:54.539 --rc genhtml_branch_coverage=1 00:01:54.539 --rc genhtml_function_coverage=1 00:01:54.539 --rc genhtml_legend=1 00:01:54.539 --rc geninfo_all_blocks=1 00:01:54.539 --no-external' 00:01:54.539 16:46:33 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:54.539 --rc lcov_branch_coverage=1 00:01:54.539 --rc lcov_function_coverage=1 00:01:54.539 --rc genhtml_branch_coverage=1 00:01:54.539 --rc genhtml_function_coverage=1 00:01:54.539 --rc genhtml_legend=1 00:01:54.539 --rc geninfo_all_blocks=1 00:01:54.539 --no-external' 00:01:54.539 16:46:33 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:54.539 lcov: LCOV version 1.14 00:01:54.539 16:46:33 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:06.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:06.789 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:06.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:06.789 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:06.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:06.789 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:06.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:06.789 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:21.696 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:21.696 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:21.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:21.697 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:21.697 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:21.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:21.697 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:21.698 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:21.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:21.698 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:23.084 16:47:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:23.084 16:47:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:23.084 16:47:01 -- common/autotest_common.sh@10 -- # set +x 00:02:23.084 16:47:01 -- spdk/autotest.sh@91 -- # rm -f 00:02:23.084 16:47:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:26.385 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:26.385 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:26.386 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:26.386 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:26.646 16:47:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:26.646 16:47:05 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:26.646 16:47:05 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:26.646 16:47:05 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:26.646 16:47:05 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:26.646 16:47:05 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:26.646 16:47:05 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:26.646 16:47:05 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:26.646 16:47:05 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:26.646 16:47:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:26.646 16:47:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:26.646 16:47:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:26.646 16:47:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:26.646 16:47:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:26.646 16:47:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:26.646 No valid GPT data, bailing 00:02:26.646 16:47:05 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:02:26.646 16:47:05 -- scripts/common.sh@391 -- # pt= 00:02:26.646 16:47:05 -- scripts/common.sh@392 -- # return 1 00:02:26.646 16:47:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:26.646 1+0 records in 00:02:26.646 1+0 records out 00:02:26.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525842 s, 199 MB/s 00:02:26.646 16:47:05 -- spdk/autotest.sh@118 -- # sync 00:02:26.646 16:47:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:26.646 16:47:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:26.646 16:47:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:34.785 16:47:13 -- spdk/autotest.sh@124 -- # uname -s 00:02:34.785 16:47:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:34.785 16:47:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:34.785 16:47:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:34.785 16:47:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:34.785 16:47:13 -- common/autotest_common.sh@10 -- # set +x 00:02:34.785 ************************************ 00:02:34.785 START TEST setup.sh 00:02:34.785 ************************************ 00:02:34.785 16:47:13 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:34.785 * Looking for test storage... 00:02:34.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.785 16:47:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:34.785 16:47:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:34.785 16:47:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:34.785 16:47:13 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:34.785 16:47:13 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:34.785 16:47:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:34.785 ************************************ 00:02:34.785 START TEST acl 00:02:34.785 ************************************ 00:02:34.785 16:47:13 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:34.785 * Looking for test storage... 
00:02:34.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.785 16:47:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:34.785 16:47:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.045 16:47:13 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:35.045 16:47:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:35.045 16:47:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:35.045 16:47:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:35.045 16:47:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:35.045 16:47:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:35.045 16:47:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.045 16:47:13 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.250 16:47:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:39.250 16:47:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:39.250 16:47:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:39.250 16:47:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:39.250 16:47:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.250 16:47:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:41.798 Hugepages 00:02:41.798 node hugesize free / total 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.798 00:02:41.798 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.798 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:41.799 16:47:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:41.799 16:47:20 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.799 16:47:20 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.799 16:47:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:41.799 ************************************ 00:02:41.799 START TEST denied 00:02:41.799 ************************************ 00:02:41.799 16:47:20 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:41.799 16:47:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:41.799 16:47:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:41.799 16:47:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:41.799 16:47:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.799 16:47:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:46.000 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:02:46.000 16:47:24 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.000 16:47:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:50.205 00:02:50.205 real 0m8.450s 00:02:50.205 user 0m2.755s 00:02:50.205 sys 0m4.986s 00:02:50.205 16:47:28 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:50.205 16:47:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:50.205 ************************************ 00:02:50.205 END TEST denied 00:02:50.205 ************************************ 00:02:50.205 16:47:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:50.205 16:47:29 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:50.205 16:47:29 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:50.205 16:47:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:50.465 ************************************ 00:02:50.465 START TEST allowed 00:02:50.465 ************************************ 00:02:50.465 16:47:29 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:50.465 16:47:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:02:50.465 16:47:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:50.466 16:47:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:02:50.466 16:47:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.466 16:47:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.810 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:55.810 16:47:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:55.810 16:47:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:55.810 16:47:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:55.810 16:47:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.810 16:47:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.020 00:03:00.020 real 0m9.477s 00:03:00.020 user 0m2.666s 00:03:00.020 sys 0m5.058s 00:03:00.020 16:47:38 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.020 16:47:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:00.020 ************************************ 00:03:00.020 END TEST allowed 00:03:00.020 ************************************ 00:03:00.020 00:03:00.020 real 0m25.078s 00:03:00.020 user 0m8.043s 00:03:00.020 sys 0m14.686s 00:03:00.020 16:47:38 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.020 16:47:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:00.020 ************************************ 00:03:00.020 END TEST acl 00:03:00.020 ************************************ 00:03:00.020 16:47:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.020 16:47:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.020 16:47:38 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.020 16:47:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.020 ************************************ 00:03:00.020 START TEST hugepages 00:03:00.020 ************************************ 00:03:00.020 16:47:38 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.020 * Looking for test storage... 00:03:00.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.020 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 107355264 kB' 'MemAvailable: 110722756 kB' 'Buffers: 2696 kB' 'Cached: 10496000 kB' 'SwapCached: 0 kB' 'Active: 7480956 kB' 'Inactive: 3486216 kB' 'Active(anon): 6915580 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471936 kB' 'Mapped: 204108 kB' 'Shmem: 6447104 kB' 'KReclaimable: 280464 kB' 'Slab: 1037492 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 757028 kB' 'KernelStack: 26960 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 8289612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233968 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.021 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.022 16:47:38 
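The xtrace above is setup/common.sh walking /proc/meminfo field by field until it reaches Hugepagesize (2048 kB on this machine); setup/hugepages.sh then records that as the default page size, enumerates both NUMA nodes, and zeroes every per-node nr_hugepages counter before the tests run. A condensed sketch of that pattern, simplified from the trace rather than the verbatim setup/common.sh or setup/hugepages.sh source:

    # Scan /proc/meminfo for a single key, as the traced read loop does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB) in this run

    # Drop any pre-existing per-node reservations (needs root), matching the
    # clear_hp loop above that echoes 0 into each per-node hugepages counter.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done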
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:00.022 16:47:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:00.022 16:47:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.022 16:47:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.022 16:47:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.285 ************************************ 00:03:00.285 START TEST default_setup 00:03:00.285 ************************************ 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.285 16:47:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.589 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
00:03:03.589 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:03.589 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109510748 kB' 'MemAvailable: 112878240 kB' 'Buffers: 2696 kB' 'Cached: 10496120 kB' 'SwapCached: 0 kB' 'Active: 7492948 kB' 'Inactive: 3486216 kB' 'Active(anon): 6927572 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483868 kB' 'Mapped: 203888 kB' 'Shmem: 6447224 kB' 'KReclaimable: 280464 kB' 'Slab: 1035100 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754636 kB' 'KernelStack: 26912 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8301376 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 234000 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.852 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
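The numbers in these snapshots are self-consistent: default_setup asked get_test_nr_hugepages for 2097152 kB on node 0, and with the detected 2048 kB default page size that works out to 2097152 / 2048 = 1024 pages, matching the HugePages_Total: 1024 and HugePages_Free: 1024 reported by /proc/meminfo. verify_nr_hugepages now repeats the same field-by-field scan for HugePages_Surp and then HugePages_Rsvd. An illustrative way to spot-check the same counters by hand (not part of the test scripts):

    # Hugepage accounting the test is validating, straight from the kernel.
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages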
00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109510916 kB' 'MemAvailable: 112878408 kB' 'Buffers: 2696 kB' 'Cached: 10496124 kB' 'SwapCached: 0 kB' 'Active: 7492484 kB' 'Inactive: 3486216 kB' 'Active(anon): 6927108 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483400 kB' 'Mapped: 203860 kB' 'Shmem: 6447228 kB' 'KReclaimable: 280464 kB' 'Slab: 1035092 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754628 kB' 'KernelStack: 26896 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8301396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233984 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.853 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.854 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109511108 kB' 'MemAvailable: 112878600 kB' 'Buffers: 2696 kB' 'Cached: 10496124 kB' 'SwapCached: 0 kB' 'Active: 7492140 kB' 'Inactive: 3486216 kB' 'Active(anon): 6926764 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483016 kB' 'Mapped: 203860 kB' 'Shmem: 6447228 kB' 'KReclaimable: 280464 kB' 'Slab: 1035152 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754688 kB' 'KernelStack: 26912 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8301416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233984 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 
16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.855 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.856 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.117 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.118 nr_hugepages=1024 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.118 resv_hugepages=0 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.118 surplus_hugepages=0 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.118 anon_hugepages=0 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109512540 kB' 'MemAvailable: 112880032 kB' 'Buffers: 2696 kB' 'Cached: 10496160 kB' 'SwapCached: 0 kB' 'Active: 7492144 
kB' 'Inactive: 3486216 kB' 'Active(anon): 6926768 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482996 kB' 'Mapped: 203860 kB' 'Shmem: 6447264 kB' 'KReclaimable: 280464 kB' 'Slab: 1035152 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754688 kB' 'KernelStack: 26928 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8302676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 233968 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 
16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.118 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:04.119 
16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.119 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52699028 kB' 'MemUsed: 12959980 kB' 'SwapCached: 0 kB' 'Active: 5084532 kB' 'Inactive: 3253748 kB' 'Active(anon): 4670064 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176124 kB' 'Mapped: 119372 kB' 'AnonPages: 165496 kB' 'Shmem: 4507908 kB' 'KernelStack: 13928 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 667776 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 466000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.120 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.120 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:04.121 node0=1024 expecting 1024 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:04.121 00:03:04.121 real 0m3.897s 00:03:04.121 user 0m1.520s 00:03:04.121 sys 0m2.400s 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.121 16:47:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:04.121 ************************************ 00:03:04.121 END TEST default_setup 00:03:04.121 ************************************ 00:03:04.121 16:47:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:04.121 16:47:42 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.121 16:47:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.121 16:47:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.121 ************************************ 00:03:04.121 START TEST per_node_1G_alloc 00:03:04.121 ************************************ 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
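Editor's note: the long run of "continue" records above is the xtrace of the setup/common.sh get_meminfo helper walking /proc/meminfo one "key: value" line at a time (IFS=': ' read -r var val _) and skipping every key until it reaches the requested one (HugePages_Surp here), then echoing the value and returning. A minimal sketch of that scan pattern, reading the file directly instead of the script's pre-captured array, with an illustrative function name (get_field) that is not part of the original script:

    # Sketch only: approximates the scan pattern visible in the xtrace above.
    # get_field is a hypothetical name; the real helper is setup/common.sh's get_meminfo.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key, as in the trace
            echo "$val"                        # numeric value; trailing "kB" lands in "_"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_field HugePages_Surp    # prints 0 on this host, matching the "echo 0" record above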
00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.121 16:47:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.420 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:07.420 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:07.420 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109539068 kB' 'MemAvailable: 112906560 kB' 'Buffers: 2696 kB' 'Cached: 10496276 kB' 'SwapCached: 0 kB' 'Active: 7492276 kB' 'Inactive: 3486216 kB' 'Active(anon): 6926900 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482876 kB' 'Mapped: 202776 kB' 'Shmem: 6447380 kB' 'KReclaimable: 280464 kB' 'Slab: 1034848 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754384 kB' 'KernelStack: 26944 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8291776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234272 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 
113246208 kB' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
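Editor's note: the printf '%s\n' block above is the /proc/meminfo snapshot that `mapfile -t mem` captured for this pass (no node was selected, so the system-wide file is used). The `mem=("${mem[@]#Node +([0-9]) }")` step at setup/common.sh@29 strips a leading "Node <N> " prefix so per-node files (/sys/devices/system/node/node<N>/meminfo) can be parsed with the same loop; for the system-wide file it is a no-op. A small sketch of that prefix strip, with made-up sample lines for illustration (extglob is needed for the +([0-9]) pattern):

    # Sketch of the "Node <N> " prefix strip seen at setup/common.sh@29; sample data is invented.
    shopt -s extglob
    mem=("Node 0 MemTotal:       64000000 kB" "Node 0 HugePages_Total:   512")
    mem=("${mem[@]#Node +([0-9]) }")   # -> "MemTotal:       64000000 kB" "HugePages_Total:   512"
    printf '%s\n' "${mem[@]}"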
00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.686 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
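Editor's note: at this point verify_nr_hugepages has finished the AnonHugePages pass (anon=0, since transparent hugepages are not set to "never" on this host) and the trace below repeats the same scan for HugePages_Surp (surp=0) and then HugePages_Rsvd. A hedged sketch of that collection step, using the script's get_meminfo helper but with a simplified, reconstructed body rather than the verbatim hugepages.sh code; the expected total of 1024 pages (512 per node on nodes 0 and 1) comes from the nr_hugepages=1024 record above:

    # Hedged reconstruction of the checks traced around this point, not the exact script.
    expected=1024                              # 512 pages on node0 + 512 on node1, per the trace
    anon=$(get_meminfo AnonHugePages)          # 0 in this run (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)         # read at hugepages.sh@100
    total=$(get_meminfo HugePages_Total)       # 1024 in the meminfo snapshot above
    (( total == expected )) || echo "unexpected HugePages_Total: $total"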
00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109538452 kB' 'MemAvailable: 112905944 kB' 'Buffers: 2696 kB' 'Cached: 10496280 kB' 'SwapCached: 0 kB' 'Active: 7493656 kB' 'Inactive: 3486216 kB' 'Active(anon): 6928280 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483944 kB' 'Mapped: 203180 kB' 'Shmem: 6447384 kB' 'KReclaimable: 280464 kB' 'Slab: 1034860 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754396 kB' 'KernelStack: 26928 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8294340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234272 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.687 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.687 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:07.688 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109533516 kB' 'MemAvailable: 112901008 kB' 'Buffers: 2696 kB' 'Cached: 10496296 kB' 'SwapCached: 0 kB' 'Active: 7496872 kB' 'Inactive: 3486216 kB' 'Active(anon): 6931496 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487408 kB' 'Mapped: 203496 kB' 'Shmem: 6447400 kB' 'KReclaimable: 280464 kB' 'Slab: 1034860 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754396 kB' 'KernelStack: 26992 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8296328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234244 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.688 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.689 nr_hugepages=1024 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.689 resv_hugepages=0 00:03:07.689 16:47:46 
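At this point both per-run corrections are known to be zero: the HugePages_Surp scan gave surp=0 and the HugePages_Rsvd scan just above gave resv=0, and the test reports nr_hugepages=1024 and resv_hugepages=0 (surplus_hugepages=0 and anon_hugepages=0 follow just below) before asserting that the page count read back from meminfo equals the requested count plus surplus and reserved pages (the (( ... == nr_hugepages + surp + resv )) records at hugepages.sh@107 and @110). A minimal self-contained sketch of that consistency check is shown here; the awk one-liners and the variable name total are illustrative, while nr_hugepages, surp and resv mirror the names in the trace.

# Sketch of the accounting being asserted below (assumes a Linux /proc/meminfo).
nr_hugepages=1024                                             # pages requested by the test
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 in the log above
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 in the log above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the log above
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'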
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.689 surplus_hugepages=0 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.689 anon_hugepages=0 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109534420 kB' 'MemAvailable: 112901912 kB' 'Buffers: 2696 kB' 'Cached: 10496320 kB' 'SwapCached: 0 kB' 'Active: 7491288 kB' 'Inactive: 3486216 kB' 'Active(anon): 6925912 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482296 kB' 'Mapped: 202992 kB' 'Shmem: 6447424 kB' 'KReclaimable: 280464 kB' 'Slab: 1034860 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754396 kB' 'KernelStack: 26976 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8291840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234256 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.689 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:07.690 16:47:46 
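Once the global count checks out, get_nodes (hugepages.sh@112 above) walks /sys/devices/system/node/node<N>, recording 512 pages against each node in nodes_sys[]; the second iteration and no_nodes=2 follow just below, after which the per-node pass calls get_meminfo HugePages_Surp 0, so the counters now come from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo. The sketch below shows one rough, independent way to enumerate the nodes and read their 2048 kB hugepage counters; the hugepages/hugepages-2048kB sysfs attributes are standard kernel files and are an assumption here, not something this test is shown using.

#!/usr/bin/env bash
# Illustrative only: list each NUMA node's default-size hugepage counters.
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    nr=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(< "$node/hugepages/hugepages-2048kB/free_hugepages")
    echo "node$id: nr_hugepages=$nr free_hugepages=$free"
done

On the machine in this log, each of the two nodes is recorded with 512 pages, matching the nodes_sys[] assignments in the surrounding trace.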
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53756124 kB' 'MemUsed: 11902884 kB' 'SwapCached: 0 kB' 'Active: 5085556 kB' 'Inactive: 3253748 kB' 'Active(anon): 4671088 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176280 kB' 'Mapped: 118156 kB' 'AnonPages: 166260 kB' 'Shmem: 4508064 kB' 'KernelStack: 13896 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 667544 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 465768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.690 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 55778264 kB' 'MemUsed: 4901612 kB' 'SwapCached: 0 kB' 'Active: 2406132 kB' 'Inactive: 232468 kB' 'Active(anon): 2255224 kB' 'Inactive(anon): 0 kB' 'Active(file): 150908 kB' 'Inactive(file): 232468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2322760 kB' 'Mapped: 84520 kB' 'AnonPages: 315916 kB' 'Shmem: 1939384 kB' 'KernelStack: 13016 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78688 kB' 'Slab: 367300 kB' 'SReclaimable: 78688 kB' 'SUnreclaim: 288612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:07.691 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:07.692 node0=512 expecting 512 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:07.692 node1=512 expecting 512 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:07.692 00:03:07.692 real 0m3.633s 00:03:07.692 user 0m1.411s 00:03:07.692 sys 0m2.252s 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.692 16:47:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:07.692 ************************************ 00:03:07.692 END TEST per_node_1G_alloc 00:03:07.692 ************************************ 00:03:07.952 16:47:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:07.952 16:47:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:07.952 16:47:46 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:07.952 16:47:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.952 ************************************ 00:03:07.952 START TEST even_2G_alloc 00:03:07.952 ************************************ 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.952 16:47:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.250 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:03:11.250 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:11.250 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109576628 kB' 'MemAvailable: 112944120 kB' 'Buffers: 2696 kB' 'Cached: 10496476 kB' 'SwapCached: 0 kB' 'Active: 7494508 kB' 'Inactive: 3486216 kB' 'Active(anon): 6929132 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484980 kB' 'Mapped: 202800 kB' 'Shmem: 6447580 kB' 'KReclaimable: 280464 kB' 'Slab: 1034896 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754432 kB' 'KernelStack: 27088 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8292876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234368 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.250 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.251 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
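This even_2G_alloc pass requested 1024 x 2 MiB hugepages with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, i.e. 512 per NUMA node, and verify_nr_hugepages is now walking the meminfo counters to confirm it. A rough stand-alone equivalent of that request-and-check, assuming the standard sysfs hugepage layout rather than the project's setup.sh, would be:

total=1024                                              # 1024 x 2048 kB = 2 GiB
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( total / ${#nodes[@]} ))                    # even split, e.g. 512/512 on 2 nodes
for n in "${nodes[@]}"; do
    echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
for n in "${nodes[@]}"; do
    got=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${n##*/}=$got expecting $per_node"            # mirrors the 'nodeN=512 expecting 512' lines in the log
done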
00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109579056 kB' 'MemAvailable: 112946548 kB' 'Buffers: 2696 kB' 'Cached: 10496480 kB' 'SwapCached: 0 kB' 'Active: 7496096 kB' 'Inactive: 3486216 kB' 'Active(anon): 6930720 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486656 kB' 'Mapped: 202756 kB' 'Shmem: 6447584 kB' 'KReclaimable: 280464 kB' 'Slab: 1034956 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754492 kB' 'KernelStack: 27120 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8292648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234320 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.252 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.518 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109581292 kB' 'MemAvailable: 112948784 kB' 'Buffers: 2696 kB' 'Cached: 10496484 kB' 'SwapCached: 0 kB' 'Active: 7495924 kB' 'Inactive: 3486216 kB' 'Active(anon): 6930548 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486564 kB' 'Mapped: 202756 kB' 'Shmem: 6447588 kB' 'KReclaimable: 280464 kB' 'Slab: 1034956 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754492 kB' 'KernelStack: 26848 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8292916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234160 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.519 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.520 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.521 nr_hugepages=1024 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.521 resv_hugepages=0 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.521 surplus_hugepages=0 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.521 anon_hugepages=0 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109580648 kB' 'MemAvailable: 112948140 kB' 'Buffers: 2696 kB' 'Cached: 10496520 kB' 'SwapCached: 0 kB' 'Active: 7495776 kB' 'Inactive: 3486216 kB' 'Active(anon): 6930400 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486336 kB' 'Mapped: 202756 kB' 'Shmem: 6447624 kB' 'KReclaimable: 280464 kB' 'Slab: 1034732 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754268 kB' 'KernelStack: 27024 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8292940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234192 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.521 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 
16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.522 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 
16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53781788 kB' 'MemUsed: 11877220 kB' 'SwapCached: 0 kB' 'Active: 5087776 kB' 'Inactive: 3253748 kB' 'Active(anon): 4673308 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176456 kB' 'Mapped: 118216 kB' 'AnonPages: 168480 kB' 'Shmem: 4508240 kB' 'KernelStack: 13864 kB' 'PageTables: 
4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 667544 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 465768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.523 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 55801832 kB' 'MemUsed: 4878044 kB' 'SwapCached: 0 kB' 'Active: 2408228 kB' 'Inactive: 232468 kB' 'Active(anon): 2257320 kB' 'Inactive(anon): 0 kB' 'Active(file): 150908 kB' 'Inactive(file): 232468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2322780 kB' 'Mapped: 84540 kB' 'AnonPages: 318112 kB' 'Shmem: 1939404 kB' 'KernelStack: 13160 kB' 'PageTables: 
4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78688 kB' 'Slab: 367156 kB' 'SReclaimable: 78688 kB' 'SUnreclaim: 288468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.524 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:11.525 node0=512 expecting 512 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.525 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.526 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:11.526 node1=512 expecting 512 00:03:11.526 16:47:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:11.526 00:03:11.526 real 0m3.654s 00:03:11.526 user 0m1.421s 00:03:11.526 sys 0m2.266s 00:03:11.526 16:47:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:11.526 16:47:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:11.526 ************************************ 00:03:11.526 END TEST even_2G_alloc 00:03:11.526 ************************************ 00:03:11.526 16:47:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:11.526 16:47:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:11.526 16:47:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.526 16:47:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.526 ************************************ 00:03:11.526 START TEST odd_alloc 00:03:11.526 
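The repeated '[[ <key> == \H\u\g\e... ]] / continue' lines in the even_2G_alloc trace above are the expansion of a meminfo lookup: the helper reads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node is given, walks every 'Key: value' pair, skips the keys it is not asked about, and echoes the value of the requested key (HugePages_Total gave 1024 above; HugePages_Surp gave 0 for node 0 and for node 1). A minimal stand-alone sketch of that behaviour, with an illustrative function name rather than the exact setup/common.sh implementation:

#!/usr/bin/env bash
# Sketch of the traced lookup: scan a meminfo file for one key and print its value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val rest
    while IFS=': ' read -r var val rest; do
        if [[ $var == Node ]]; then
            # per-node files prefix each line with "Node <N> "; re-split the remainder
            IFS=': ' read -r var val rest <<<"$rest"
        fi
        if [[ $var == "$get" ]]; then        # every other key is simply skipped
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Values matching the trace above:
#   get_meminfo_sketch HugePages_Total      # -> 1024
#   get_meminfo_sketch HugePages_Surp 0     # -> 0 (node 0); node 1 reads 0 as well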
************************************ 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.526 16:47:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.832 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:14.832 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:14.832 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109544424 kB' 'MemAvailable: 112911916 kB' 'Buffers: 2696 kB' 'Cached: 10496656 kB' 'SwapCached: 0 kB' 'Active: 7492788 kB' 'Inactive: 3486216 kB' 'Active(anon): 6927412 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483164 kB' 'Mapped: 202820 kB' 'Shmem: 6447760 kB' 'KReclaimable: 280464 kB' 'Slab: 1035148 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754684 kB' 'KernelStack: 26800 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8290672 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234176 kB' 
'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.099 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.100 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.100 
16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109544916 kB' 'MemAvailable: 112912408 kB' 'Buffers: 2696 kB' 'Cached: 10496660 kB' 'SwapCached: 0 kB' 'Active: 7492676 kB' 'Inactive: 3486216 kB' 'Active(anon): 6927300 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483080 kB' 'Mapped: 202780 kB' 'Shmem: 6447764 kB' 'KReclaimable: 280464 kB' 'Slab: 1035128 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754664 kB' 'KernelStack: 26800 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8290688 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234144 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
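The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' here are the xtrace of the get_meminfo helper in setup/common.sh walking /proc/meminfo one key at a time until it hits the requested counter (AnonHugePages above, HugePages_Surp in this pass). A condensed sketch of that helper, reconstructed only from what the trace shows (common.sh@17-@33); the in-tree script may differ in detail:

#!/usr/bin/env bash
shopt -s extglob                     # needed for the "Node +([0-9]) " strip seen at common.sh@29

# Sketch of get_meminfo as it appears in the xtrace; not the verbatim SPDK source.
get_meminfo() {
    local get=$1                     # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
    local node=${2:-}                # empty in this run, so the global /proc/meminfo is used
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, the per-node sysfs meminfo is read instead (common.sh@23-@25).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix every line with "Node N "
    # The "IFS=': '" / "read -r var val _" / "continue" entries in the trace are this loop.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"             # common.sh@33: prints 0 for the counters in this run
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}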
00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.101 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109545676 kB' 'MemAvailable: 112913168 kB' 'Buffers: 2696 kB' 'Cached: 10496660 kB' 'SwapCached: 0 kB' 'Active: 7492448 kB' 'Inactive: 3486216 kB' 'Active(anon): 6927072 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482764 kB' 'Mapped: 202700 kB' 'Shmem: 6447764 kB' 'KReclaimable: 280464 kB' 'Slab: 1035112 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754648 kB' 'KernelStack: 26848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8290712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234160 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.102 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 
16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
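This pass repeats the same walk for HugePages_Rsvd; HugePages_Total follows further down. Once anon, surp and resv are known, the odd_alloc test in setup/hugepages.sh (entries @97-@110 later in this trace) reduces to the accounting below. A condensed sketch; the function name and the 'requested' variable are illustrative, only the literal 1025 and the echoed counters come from the trace:

# Sketch of the odd_alloc accounting seen at setup/hugepages.sh@97-@110; not the verbatim source.
verify_odd_alloc() {
    local requested=1025                          # odd page count the test asks for
    local anon surp resv nr_hugepages
    anon=$(get_meminfo AnonHugePages)             # 0 in this run (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)            # 0 in this run (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run (hugepages.sh@100)
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1025 in this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # hugepages.sh@107/@109: the pool must hold exactly the odd count that was requested,
    # with no surplus or reserved pages making up the difference.
    (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages ))
}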
00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.103 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:15.104 nr_hugepages=1025 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.104 resv_hugepages=0 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.104 surplus_hugepages=0 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.104 anon_hugepages=0 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109546356 kB' 'MemAvailable: 112913848 kB' 'Buffers: 2696 kB' 'Cached: 10496692 kB' 'SwapCached: 0 kB' 'Active: 7492168 kB' 'Inactive: 3486216 kB' 'Active(anon): 6926792 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482420 kB' 'Mapped: 202700 kB' 'Shmem: 6447796 kB' 'KReclaimable: 280464 kB' 'Slab: 1035112 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754648 kB' 'KernelStack: 26848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 8290732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234160 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.104 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.105 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53767364 kB' 'MemUsed: 11891644 kB' 'SwapCached: 0 kB' 'Active: 5087464 kB' 'Inactive: 3253748 kB' 'Active(anon): 4672996 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176564 kB' 'Mapped: 118156 kB' 'AnonPages: 167932 kB' 'Shmem: 4508348 kB' 'KernelStack: 13880 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 668108 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 466332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.106 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.107 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 55778820 kB' 'MemUsed: 4901056 kB' 'SwapCached: 0 kB' 'Active: 2405064 kB' 'Inactive: 232468 kB' 'Active(anon): 2254156 kB' 'Inactive(anon): 0 kB' 'Active(file): 150908 kB' 'Inactive(file): 232468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2322868 kB' 'Mapped: 84544 kB' 'AnonPages: 314824 kB' 'Shmem: 1939492 kB' 'KernelStack: 12968 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78688 kB' 'Slab: 367004 kB' 'SReclaimable: 78688 kB' 'SUnreclaim: 288316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.108 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:15.109 node0=512 expecting 513 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:15.109 node1=513 expecting 512 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:15.109 00:03:15.109 real 0m3.562s 00:03:15.109 user 0m1.387s 00:03:15.109 sys 0m2.180s 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:15.109 16:47:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:15.109 ************************************ 00:03:15.109 END TEST odd_alloc 00:03:15.109 ************************************ 00:03:15.109 16:47:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:15.109 16:47:53 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:15.109 16:47:53 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:15.109 16:47:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.371 ************************************ 00:03:15.371 START TEST custom_alloc 00:03:15.371 ************************************ 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
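The field-by-field matching traced above is setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) until it reaches the requested key; that is how the odd_alloc verification just completed confirms HugePages_Total=1025 system-wide and HugePages_Surp=0 on node 0 and node 1. A minimal stand-alone sketch of that lookup, assuming the standard meminfo layouts (the function name and the awk-based matching are illustrative, not the actual helper):

# Sketch only: the real get_meminfo in setup/common.sh reads the file into an
# array and strips the "Node <n> " prefix; this awk version returns the same
# value for a single key.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # per-node files prefix every line with "Node <n> ", so the key is field 3
        awk -v k="$get:" '$3 == k {print $4; exit}' \
            "/sys/devices/system/node/node$node/meminfo"
    else
        awk -v k="$get:" '$1 == k {print $2; exit}' /proc/meminfo
    fi
}
get_meminfo_sketch HugePages_Total     # 1025 in the run above
get_meminfo_sketch HugePages_Surp 0    # 0 for node 0 above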
00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.371 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.372 16:47:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.674 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:18.674 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:03:18.674 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:18.674 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:18.674 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:18.674 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 108499932 kB' 'MemAvailable: 111867424 kB' 'Buffers: 2696 kB' 'Cached: 10496832 kB' 'SwapCached: 0 kB' 'Active: 7493828 kB' 'Inactive: 3486216 kB' 'Active(anon): 6928452 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483888 kB' 'Mapped: 202744 kB' 'Shmem: 6447936 kB' 'KReclaimable: 280464 kB' 'Slab: 1034972 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754508 kB' 'KernelStack: 26880 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8291636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234272 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.942 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.943 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 108500536 kB' 'MemAvailable: 111868028 kB' 'Buffers: 2696 kB' 'Cached: 10496836 kB' 'SwapCached: 0 kB' 'Active: 7493504 kB' 'Inactive: 3486216 kB' 'Active(anon): 6928128 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483516 kB' 'Mapped: 202720 kB' 'Shmem: 6447940 kB' 'KReclaimable: 280464 kB' 'Slab: 1035016 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754552 kB' 'KernelStack: 26848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8291652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234256 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 
16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.944 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
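The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] followed by continue above are bash xtrace output of a field-by-field scan over the captured meminfo lines; the backslashes are simply how xtrace quotes the literal match pattern. A condensed sketch of that scan is shown below; it is simplified from the traced read/IFS loop, not the verbatim setup/common.sh code.

    # Condensed sketch of the traced scan: read each "Key: value" pair and keep
    # only the requested counter; every non-matching key appears in the xtrace
    # as a failed [[ ... ]] test plus a continue.
    get=HugePages_Surp
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"        # -> 0 in this run
        break
    done < /proc/meminfo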
00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.945 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
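The repeated [[ -e /sys/devices/system/node/node/meminfo ]] tests in these lookups come from the node-selection step: the node variable is empty here, so the nonexistent path fails the test and the lookup falls back to the global /proc/meminfo rather than a per-node sysfs file. A short hedged sketch of that selection, reusing the mem_f and node names seen in the trace but otherwise simplified:

    # Node-aware source selection; node is empty for a global lookup,
    # or e.g. node=0 to read the NUMA node 0 counters instead.
    node=""
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "reading hugepage counters from $mem_f"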
00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 108500856 kB' 'MemAvailable: 111868348 kB' 'Buffers: 2696 kB' 'Cached: 10496852 kB' 'SwapCached: 0 kB' 'Active: 7493520 kB' 'Inactive: 3486216 kB' 'Active(anon): 6928144 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483516 kB' 'Mapped: 202720 kB' 'Shmem: 
6447956 kB' 'KReclaimable: 280464 kB' 'Slab: 1035016 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754552 kB' 'KernelStack: 26848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8291676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234256 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 
16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.946 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.947 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[00:03:18.947-00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32: the get_meminfo read loop checks each remaining /proc/meminfo field (Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Rsvd and hits continue on every non-match.]
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:18.948 nr_hugepages=1536
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.948 resv_hugepages=0
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.948 surplus_hugepages=0
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.948 anon_hugepages=0
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 108500100 kB' 'MemAvailable: 111867592 kB' 'Buffers: 2696 kB' 'Cached: 10496852 kB' 'SwapCached: 0 kB' 'Active: 7493520 kB' 'Inactive: 3486216 kB' 'Active(anon): 6928144 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483516 kB' 'Mapped: 202720 kB' 'Shmem: 6447956 kB' 'KReclaimable: 280464 kB' 'Slab: 1035016 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754552 kB' 'KernelStack: 26848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 8291696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234256 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB'
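The xtrace above is setup/common.sh's get_meminfo helper pulling a single field out of a meminfo dump: pick /proc/meminfo or a per-node file, strip any "Node N" prefix, then read "key: value" pairs until the requested key matches and echo its value. A minimal stand-alone sketch of that idea (illustrative only, not the exact SPDK source; the function name get_meminfo_sketch and its internals are assumptions):

# Sketch: return the value of one meminfo field, optionally for a NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <N> "; strip that first,
    # then scan "key: value [kB]" pairs until the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Examples matching the values echoed in this trace:
#   get_meminfo_sketch HugePages_Rsvd    -> 0
#   get_meminfo_sketch HugePages_Total   -> 1536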
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.948 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[00:03:18.948-00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32: the read loop walks the /proc/meminfo dump above field by field (MemTotal through Unaccepted) against HugePages_Total and hits continue on every non-match.]
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
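The @107/@109/@110 checks and the get_nodes walk above reduce to simple bookkeeping: the 1536 pages the test configured must equal the HugePages_Total that get_meminfo just returned (surplus and reserved pages are both 0 in this run), and nodes_sys[] records how the kernel actually split them across NUMA nodes (512 on node0, 1024 on node1). A rough stand-alone equivalent of that verification, using the standard /proc and sysfs locations (variable names and the exact checks are illustrative, not the hugepages.sh source):

# Sketch: confirm a requested hugepage count and its per-node split.
requested=1536

total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/  {print $NF}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/  {print $NF}' /proc/meminfo)
echo "total=$total surplus=$surp reserved=$rsvd"

(( requested == total )) || echo "global count mismatch: want $requested, have $total"

# Per-node meminfo lines look like "Node 0 HugePages_Total:   512",
# so take the last field; node numbers become the (indexed) array keys.
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done

sum=0
for n in "${!nodes_sys[@]}"; do (( sum += nodes_sys[n] )); done
(( sum == total )) || echo "per-node split (${nodes_sys[*]}) does not add up to $total"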
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.950 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53759964 kB' 'MemUsed: 11899044 kB' 'SwapCached: 0 kB' 'Active: 5088752 kB' 'Inactive: 3253748 kB' 'Active(anon): 4674284 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176620 kB' 'Mapped: 118156 kB' 'AnonPages: 169076 kB' 'Shmem: 4508404 kB' 'KernelStack: 13912 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 668060 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 466284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[00:03:18.950-00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32: the read loop walks each field of the node0 dump above (MemTotal through HugePages_Free) against HugePages_Surp and hits continue on every non-match.]
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
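The node0 values get_meminfo just returned (HugePages_Total 512, HugePages_Free 512, HugePages_Surp 0) are also exposed as individual sysfs counters, which is often the quickest way to eyeball a per-node allocation outside the test harness. A small sketch using the standard sysfs layout (2048 kB page size assumed, matching the Hugepagesize: 2048 kB field in the dump earlier in this trace):

# Sketch: dump per-node 2 MB hugepage counters straight from sysfs.
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    printf '%s: nr=%s free=%s surplus=%s\n' "${node##*/}" \
        "$(cat "$hp/nr_hugepages")" \
        "$(cat "$hp/free_hugepages")" \
        "$(cat "$hp/surplus_hugepages")"
done

# On the machine in this trace the output would be roughly:
#   node0: nr=512 free=512 surplus=0
#   node1: nr=1024 free=1024 surplus=0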
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.951 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679876 kB' 'MemFree: 54740136 kB' 'MemUsed: 5939740 kB' 'SwapCached: 0 kB' 'Active: 2404672 kB' 'Inactive: 232468 kB' 'Active(anon): 2253764 kB' 'Inactive(anon): 0 kB' 'Active(file): 150908 kB' 'Inactive(file): 232468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2322988 kB' 'Mapped: 84564 kB' 'AnonPages: 314240 kB' 'Shmem: 1939612 kB' 'KernelStack: 12920 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78688 kB' 'Slab: 366956 kB' 'SReclaimable: 78688 kB' 'SUnreclaim: 288268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[00:03:18.952-00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32: the read loop walks each field of the node1 dump above (MemTotal through HugePages_Free) against HugePages_Surp and hits continue on every non-match.]
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.953 node0=512 expecting 512 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:18.953 node1=1024 expecting 1024 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:18.953 00:03:18.953 real 0m3.741s 00:03:18.953 user 0m1.523s 00:03:18.953 sys 0m2.263s 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.953 16:47:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.953 ************************************ 00:03:18.953 END TEST custom_alloc 00:03:18.953 ************************************ 00:03:18.953 16:47:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:18.953 16:47:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.953 16:47:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.953 16:47:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.214 ************************************ 00:03:19.214 START TEST no_shrink_alloc 00:03:19.214 ************************************ 00:03:19.214 16:47:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:19.214 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:19.214 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.214 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- 
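The get_test_nr_hugepages 2097152 0 call traced immediately above (and completed just below) turns a kB budget into a hugepage count and pins it to the requested NUMA node. A minimal sketch of that bookkeeping, not the SPDK source: default_hugepages=2048 kB and the plain division are assumptions, but they are consistent with this log (2097152 kB request, nr_hugepages=1024, Hugepagesize 2048 kB).

    #!/usr/bin/env bash
    # Sketch of the traced get_test_nr_hugepages / get_test_nr_hugepages_per_node
    # flow; names mirror the xtrace above, the body is an illustration only.
    get_test_nr_hugepages() {
        local size=$1; shift                        # requested amount in kB, e.g. 2097152
        local node_ids=("$@")                       # e.g. (0) -> allocate only on node0
        local default_hugepages=2048                # kB, Hugepagesize from /proc/meminfo
        local nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024

        declare -ga nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages          # here: all 1024 pages on node 0
        done
    }

    get_test_nr_hugepages 2097152 0
    declare -p nodes_test    # declare -a nodes_test=([0]="1024")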
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.215 16:47:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.520 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:22.520 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:22.520 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109470116 kB' 'MemAvailable: 112837608 kB' 'Buffers: 2696 kB' 'Cached: 10497000 kB' 'SwapCached: 0 kB' 'Active: 7501492 kB' 'Inactive: 3486216 kB' 'Active(anon): 6936116 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490600 kB' 'Mapped: 203680 kB' 'Shmem: 6448104 kB' 'KReclaimable: 280464 kB' 'Slab: 1034432 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 753968 kB' 'KernelStack: 26976 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8303380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234228 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.791 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # ... field-by-field scan of the snapshot continues (SwapCached through VmallocChunk), each key compared against AnonHugePages and skipped with continue ... 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc --
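The scan condensed above is common.sh's get_meminfo walking the /proc/meminfo snapshot with IFS=': ' until the requested key matches; the value it echoes is what verify_nr_hugepages stores as anon=0 a few entries below. A minimal standalone sketch of the same pattern, with hypothetical simplifications (the per-node meminfo handling visible in the trace is omitted):

    #!/usr/bin/env bash
    # Re-creation of the pattern in the xtrace: split each "Key: value kB" line
    # on ': ' and echo the value of the requested key; skip everything else.
    get_meminfo() {
        local get=$1
        local mem_f=/proc/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo AnonHugePages     # 0 in the snapshot above (kB)
    get_meminfo HugePages_Total   # 1024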
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109473644 kB' 'MemAvailable: 112841136 kB' 'Buffers: 2696 kB' 'Cached: 10497004 kB' 'SwapCached: 0 kB' 'Active: 7497092 kB' 'Inactive: 3486216 kB' 'Active(anon): 6931716 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486712 kB' 'Mapped: 203568 kB' 'Shmem: 6448108 kB' 'KReclaimable: 280464 kB' 'Slab: 1034412 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 753948 kB' 'KernelStack: 26992 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8319376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234208 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.792 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.793 16:48:01 
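Each meminfo snapshot printed above is internally consistent: with a single hugepage size configured, Hugetlb equals HugePages_Total times Hugepagesize (1024 x 2048 kB = 2097152 kB in this run). A one-line recomputation of that product, offered as a sanity check rather than part of the test scripts:

    # Should print 2097152 kB for the snapshots in this log (1024 pages of 2048 kB).
    awk '/^HugePages_Total:/ {n = $2}
         /^Hugepagesize:/    {sz = $2}
         END                 {print n * sz " kB"}' /proc/meminfo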
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.793 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # ... field-by-field scan of the snapshot continues (Active(anon) through HugePages_Free), each key compared against HugePages_Surp and skipped with continue ... 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109471180 kB' 'MemAvailable: 112838672 kB' 'Buffers: 2696 kB' 'Cached: 10497004 kB' 'SwapCached: 0 kB' 'Active: 7500656 kB' 'Inactive: 3486216 kB' 'Active(anon): 6935280 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489608 kB' 'Mapped: 203176 kB' 'Shmem: 6448108 kB' 'KReclaimable: 280464 kB' 'Slab: 1034420 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 753956 kB' 'KernelStack: 26992 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8321776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234192 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.794 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 
16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.795 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- 
# return 0 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.796 nr_hugepages=1024 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.796 resv_hugepages=0 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.796 surplus_hugepages=0 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.796 anon_hugepages=0 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.796 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109467444 kB' 'MemAvailable: 112834936 kB' 'Buffers: 2696 kB' 'Cached: 10497040 kB' 'SwapCached: 0 kB' 'Active: 7501452 kB' 'Inactive: 3486216 kB' 'Active(anon): 6936076 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491340 kB' 'Mapped: 203652 kB' 'Shmem: 6448144 kB' 'KReclaimable: 280464 kB' 'Slab: 1034396 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 753932 kB' 'KernelStack: 26928 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8303072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234180 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 
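The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]] ... continue" above are the xtrace of setup/common.sh's get_meminfo walking every field of /proc/meminfo until it reaches the requested key (HugePages_Surp and HugePages_Rsvd so far, HugePages_Total next) and echoing that key's value. A minimal sketch of that lookup, assuming bash; the helper name get_meminfo_sketch and its exact body are illustrative, not the literal common.sh code:

# Look up one key in /proc/meminfo, or in a node-local meminfo file when a
# node number is given, and print its numeric value (mirrors the behaviour
# the trace shows; simplified, error handling omitted).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val rest
    while IFS= read -r line; do
        line=${line#"Node $node "}            # node files prefix each row with "Node N "
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                       # value only; a trailing "kB" unit lands in rest
            return 0
        fi
    done < "$mem_f"
    return 1
}

Used the way the trace uses get_meminfo, e.g. surp=$(get_meminfo_sketch HugePages_Surp) or resv=$(get_meminfo_sketch HugePages_Rsvd); both come back 0 in this run, which is why the (( 1024 == nr_hugepages + surp + resv )) check above holds.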
00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.797 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.798 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52635104 kB' 'MemUsed: 13023904 kB' 'SwapCached: 0 kB' 'Active: 5089120 kB' 'Inactive: 3253748 kB' 'Active(anon): 4674652 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176656 kB' 'Mapped: 118692 kB' 'AnonPages: 169576 kB' 'Shmem: 4508440 kB' 'KernelStack: 13976 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 667772 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 465996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
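From here the test switches to per-node accounting: get_nodes counts the nodeN directories under /sys/devices/system/node (no_nodes=2 on this box, with all 1024 hugepages on node0), and HugePages_Surp is then re-read from node0's own meminfo file. A rough sketch of that per-node bookkeeping, reusing the hypothetical get_meminfo_sketch helper from the note above (array and variable names are illustrative, not the script's):

# Collect per-node hugepage counts and surplus pages the way the trace does:
# one nodeN directory per NUMA node, each with its own meminfo file.
declare -A node_pages
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}                               # ".../node0" -> "0"
    node_pages[$node]=$(get_meminfo_sketch HugePages_Total "$node")
done
echo "no_nodes=${#node_pages[@]}"

for node in "${!node_pages[@]}"; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    echo "node${node}=${node_pages[$node]} (surplus ${surp})"
done

On this run that would report node0=1024 with zero surplus and node1=0 (all pages sit on node0), which is what the "node0=1024 expecting 1024" line further down asserts.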
00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.799 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.800 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.801 16:48:01 
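The run above is the body of setup/common.sh's get_meminfo: /proc/meminfo is snapshotted into the mem array with mapfile, and every "IFS=': '" / "read -r var val _" / "[[ ... == HugePages_Surp ]]" / "continue" group in the trace is one key being skipped until the requested one turns up. A standalone sketch of the same lookup follows; the helper name get_meminfo_field is made up here, and unlike the traced function it reads the file line by line and takes no node argument.

```bash
#!/usr/bin/env bash
# Minimal sketch (not the repo's setup/common.sh): return the value of a
# single /proc/meminfo key by splitting each line on ': ', the same way
# the traced get_meminfo loop does.
get_meminfo_field() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < /proc/meminfo
	return 1
}

# The counters this verification keeps re-reading:
for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp AnonHugePages; do
	printf '%s=%s\n' "$key" "$(get_meminfo_field "$key")"
done
```

On a machine in the state shown here this would print HugePages_Total=1024 and HugePages_Free=1024, matching the 'node0=1024 expecting 1024' line that follows in the trace.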
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.801 node0=1024 expecting 1024 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.801 16:48:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.141 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:26.141 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.141 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.406 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.406 16:48:05 
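The scan just returned 0 for HugePages_Surp, and hugepages.sh@117..@130 folded it into its per-node bookkeeping, printed 'node0=1024 expecting 1024', and confirmed the match. setup.sh then listed the PCI devices already bound to vfio-pci and, with NRHUGE=512 and CLEAR_HUGE=no, left the existing 1024 hugepages on node0 in place rather than shrinking them, which is what the no_shrink_alloc name suggests is being checked. Below is a sketch of that bookkeeping step; the array contents and the expected count are sample values assumed for illustration, only the array names and the comparison come from the trace.

```bash
#!/usr/bin/env bash
# Sketch of the per-node bookkeeping seen at hugepages.sh@117..@130.
# nodes_test / nodes_sys contents and `expected` are assumed sample inputs.
nodes_test=([0]=1024)   # hugepages counted per node by the test
nodes_sys=([0]=1024)    # hugepages the system reports per node
sorted_t=() sorted_s=()
expected=1024

(( nodes_test[0] += 0 ))  # += 0 as in the trace: the value get_meminfo just returned

for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1   # bucket the distinct counts
	sorted_s[nodes_sys[node]]=1
	echo "node$node=${nodes_test[node]} expecting $expected"
	[[ ${nodes_test[node]} == "$expected" ]] || echo "unexpected count on node$node" >&2
done
```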
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109480584 kB' 'MemAvailable: 112848076 kB' 'Buffers: 2696 kB' 'Cached: 10497168 kB' 'SwapCached: 0 kB' 'Active: 7503360 kB' 'Inactive: 3486216 kB' 'Active(anon): 6937984 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492468 kB' 'Mapped: 203808 kB' 'Shmem: 6448272 kB' 'KReclaimable: 280464 kB' 'Slab: 1035116 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754652 kB' 'KernelStack: 26944 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8305444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234212 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.406 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.407 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
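Earlier, at hugepages.sh@96, the trace tested "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" before starting this AnonHugePages lookup, i.e. anonymous hugepages are only worth counting when transparent hugepages are not set to [never]. The string being tested looks like the contents of /sys/kernel/mm/transparent_hugepage/enabled, so a standalone sketch of that guard (the function name is invented here) would be:

```bash
#!/usr/bin/env bash
# Sketch of the THP guard seen at hugepages.sh@96: only count
# AnonHugePages when transparent hugepages are not set to [never].
thp_enabled() {
	local state
	state=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
	[[ $state != *"[never]"* ]]
}

if thp_enabled; then
	echo "THP active: AnonHugePages will be folded into the count"
else
	echo "THP disabled: AnonHugePages ignored"
fi
```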
setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.408 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109479504 kB' 'MemAvailable: 112846996 kB' 'Buffers: 2696 kB' 'Cached: 10497172 kB' 'SwapCached: 0 kB' 'Active: 7503508 kB' 'Inactive: 3486216 kB' 'Active(anon): 6938132 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492676 kB' 'Mapped: 203780 kB' 'Shmem: 6448276 kB' 'KReclaimable: 280464 kB' 'Slab: 1035116 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754652 kB' 'KernelStack: 27136 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8305440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234276 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.408 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.409 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 
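Every get_meminfo call in this slice runs with an empty node, which is why the "[[ -e /sys/devices/system/node/node/meminfo ]]" checks fail and the values come from /proc/meminfo instead. With a node number, the same walk runs over /sys/devices/system/node/node<N>/meminfo after stripping the "Node <N> " prefix, which the traced code does with mem=("${mem[@]#Node +([0-9]) }"). A hedged sketch of that per-node path (the function name and argument order are made up):

```bash
#!/usr/bin/env bash
# Sketch of a per-node lookup: /sys/devices/system/node/node<N>/meminfo
# lines carry a "Node <N> " prefix, so two extra fields are consumed
# before the key/value pair (the traced code strips the prefix from its
# snapshot array instead).
get_node_meminfo_field() {
	local get=$1 node=$2 f _tag _id var val _
	f=/sys/devices/system/node/node${node}/meminfo
	[[ -e $f ]] || return 1
	while IFS=': ' read -r _tag _id var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < "$f"
	return 1
}

# e.g. the number behind the earlier 'node0=1024 expecting 1024' line:
get_node_meminfo_field HugePages_Total 0
```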
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109479628 kB' 'MemAvailable: 112847120 kB' 'Buffers: 2696 kB' 'Cached: 
10497188 kB' 'SwapCached: 0 kB' 'Active: 7503716 kB' 'Inactive: 3486216 kB' 'Active(anon): 6938340 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493388 kB' 'Mapped: 203704 kB' 'Shmem: 6448292 kB' 'KReclaimable: 280464 kB' 'Slab: 1035092 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754628 kB' 'KernelStack: 27088 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8305616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234276 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.410 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.411 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.412 nr_hugepages=1024 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.412 resv_hugepages=0 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.412 surplus_hugepages=0 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.412 anon_hugepages=0 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.412 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338884 kB' 'MemFree: 109480460 kB' 'MemAvailable: 112847952 kB' 'Buffers: 2696 kB' 'Cached: 10497212 kB' 'SwapCached: 0 kB' 'Active: 7502996 kB' 'Inactive: 3486216 kB' 'Active(anon): 6937620 kB' 'Inactive(anon): 0 kB' 'Active(file): 565376 kB' 'Inactive(file): 3486216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492588 kB' 'Mapped: 203704 kB' 'Shmem: 6448316 kB' 'KReclaimable: 280464 kB' 'Slab: 1035068 kB' 'SReclaimable: 280464 kB' 'SUnreclaim: 754604 kB' 'KernelStack: 27168 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 8303912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234260 kB' 'VmallocChunk: 0 kB' 'Percpu: 105984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3554676 kB' 'DirectMap2M: 19193856 kB' 'DirectMap1G: 113246208 kB' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.676 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.677 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 52652372 kB' 'MemUsed: 13006636 kB' 'SwapCached: 0 kB' 'Active: 5088808 kB' 'Inactive: 3253748 kB' 'Active(anon): 4674340 kB' 'Inactive(anon): 0 kB' 'Active(file): 414468 kB' 'Inactive(file): 3253748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8176684 kB' 'Mapped: 118928 kB' 'AnonPages: 169020 kB' 'Shmem: 4508468 kB' 'KernelStack: 14072 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 201776 kB' 'Slab: 668016 kB' 'SReclaimable: 201776 kB' 'SUnreclaim: 466240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 
16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.678 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.679 node0=1024 expecting 1024 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.679 00:03:26.679 real 0m7.524s 00:03:26.679 user 0m3.019s 00:03:26.679 sys 0m4.607s 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.679 16:48:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.679 ************************************ 00:03:26.679 END TEST no_shrink_alloc 00:03:26.679 ************************************ 00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
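The xtrace above is the setup/common.sh get_meminfo helper at work: it loads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node id is given), strips any "Node N " prefix, and walks the "key: value" pairs until it reaches the requested field (HugePages_Rsvd, HugePages_Total, HugePages_Surp above). A minimal standalone sketch of that lookup, assuming plain bash and the standard Linux meminfo layout; the function name meminfo_lookup is illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
shopt -s extglob                       # needed for the +([0-9]) prefix-stripping pattern

# Illustrative re-creation of the lookup traced above: load a meminfo file,
# drop any "Node N " prefix, then scan "key: value" pairs for one field.
meminfo_lookup() {
    local want=$1 node=${2:-}
    local file=/proc/meminfo
    local mem var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$file"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every key with "Node N "

    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"                # numeric value only, e.g. "1024"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

meminfo_lookup HugePages_Total         # system-wide hugepage count
meminfo_lookup HugePages_Surp 0        # surplus hugepages on NUMA node 0

Printing only the numeric value (no "kB" suffix) is what lets hugepages.sh feed the result straight into arithmetic such as (( 1024 == nr_hugepages + surp + resv )), as seen in the trace.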
00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.679 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:26.680 16:48:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:26.680 00:03:26.680 real 0m26.681s 00:03:26.680 user 0m10.533s 00:03:26.680 sys 0m16.403s 00:03:26.680 16:48:05 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.680 16:48:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.680 ************************************ 00:03:26.680 END TEST hugepages 00:03:26.680 ************************************ 00:03:26.680 16:48:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:26.680 16:48:05 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:26.680 16:48:05 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.680 16:48:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.680 ************************************ 00:03:26.680 START TEST driver 00:03:26.680 ************************************ 00:03:26.680 16:48:05 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:26.940 * Looking for test storage... 
00:03:26.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:26.940 16:48:05 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:26.940 16:48:05 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.940 16:48:05 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.227 16:48:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:32.227 16:48:10 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:32.227 16:48:10 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:32.227 16:48:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:32.227 ************************************ 00:03:32.227 START TEST guess_driver 00:03:32.227 ************************************ 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:32.227 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:32.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:32.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:32.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:32.228 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:32.228 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:32.228 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:32.228 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:32.228 16:48:10 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:32.228 Looking for driver=vfio-pci 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.228 16:48:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.769 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.031 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:35.031 16:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:35.031 16:48:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.031 16:48:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.314 00:03:40.314 real 0m8.572s 00:03:40.314 user 0m2.786s 00:03:40.314 sys 0m5.009s 00:03:40.314 16:48:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.314 16:48:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.314 ************************************ 00:03:40.314 END TEST guess_driver 00:03:40.314 ************************************ 00:03:40.314 00:03:40.314 real 0m13.232s 00:03:40.314 user 0m4.030s 00:03:40.314 sys 0m7.541s 00:03:40.314 16:48:18 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.314 
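The guess_driver trace above boils down to one decision: if VFIO is usable (IOMMU groups exist, or unsafe no-IOMMU mode is enabled) and modprobe can resolve vfio_pci to real module files, the suite binds devices to vfio-pci. A hedged sketch of that check; the pick_driver name and the uio_pci_generic fallback are assumptions, not the literal driver.sh code:

  pick_driver() {
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
      unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    if { compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == [Yy] ]]; } &&
      modprobe --show-depends vfio_pci | grep -q '\.ko'; then
      echo vfio-pci            # the log's case: 314 IOMMU groups, dependencies resolve to .ko.xz files
    else
      echo uio_pci_generic     # assumed fallback when no usable IOMMU is present
    fi
  }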
16:48:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.314 ************************************ 00:03:40.314 END TEST driver 00:03:40.314 ************************************ 00:03:40.314 16:48:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:40.314 16:48:18 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:40.314 16:48:18 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.314 16:48:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.314 ************************************ 00:03:40.314 START TEST devices 00:03:40.314 ************************************ 00:03:40.314 16:48:18 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:40.314 * Looking for test storage... 00:03:40.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.314 16:48:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:40.314 16:48:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:40.314 16:48:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.314 16:48:18 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.514 16:48:22 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:44.514 16:48:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:44.515 16:48:22 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:44.515 16:48:22 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:44.515 No valid GPT data, 
bailing 00:03:44.515 16:48:22 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:44.515 16:48:22 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:44.515 16:48:22 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:44.515 16:48:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:44.515 16:48:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:44.515 16:48:22 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:44.515 16:48:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:44.515 16:48:22 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:44.515 16:48:22 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.515 16:48:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.515 ************************************ 00:03:44.515 START TEST nvme_mount 00:03:44.515 ************************************ 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:44.515 16:48:22 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.515 16:48:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:45.085 Creating new GPT entries in memory. 00:03:45.085 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.085 other utilities. 00:03:45.085 16:48:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.085 16:48:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.085 16:48:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.085 16:48:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.085 16:48:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.467 Creating new GPT entries in memory. 00:03:46.467 The operation has completed successfully. 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1225717 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
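Condensed, the nvme_mount setup traced above is: take the NVMe disk that passed the non-zoned and 3 GiB minimum-size checks, wipe it, carve a single 1 GiB partition, wait for the partition uevent, then format and mount it and drop a dummy file for the later verify step. A sketch under those assumptions (the mount point and the foreground uevent wait are simplifications; the log backgrounds sync_dev_uevents.sh and waits on it):

  disk=/dev/nvme0n1
  part=${disk}p1
  mnt=/tmp/nvme_mount          # example mount point; the test mounts under spdk/test/setup/nvme_mount

  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # sectors 2048..2099199 = 1 GiB, as logged
  # scripts/sync_dev_uevents.sh block/partition nvme0n1p1   # wait for the kernel to publish p1

  mkdir -p "$mnt"
  mkfs.ext4 -qF "$part"
  mount "$part" "$mnt"
  touch "$mnt/test_nvme"       # the dummy file the verify step later checks for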
00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.467 16:48:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.768 16:48:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.768 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.769 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.769 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.029 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:50.029 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:03:50.029 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.029 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:50.030 16:48:28 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.030 16:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.339 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:53.340 16:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.601 16:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
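The runs of BDF comparisons above and below are the verify step: with PCI_ALLOWED pinned to the disk under test, it reads setup.sh config output line by line, ignores every other controller, and confirms the expected "Active devices: ..." entry (mount@nvme0n1:nvme0n1p1 first, then data@nvme0n1, then the dm holders). A hedged sketch of that loop, assuming it runs from the SPDK repo root; verify_active is a hypothetical name:

  verify_active() {
    local dev=$1 want=$2 found=0 pci _ status
    while read -r pci _ _ status; do
      [[ $pci == "$dev" ]] || continue        # skip every BDF that is not the disk under test
      [[ $status == *"$want"* ]] && found=1   # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
    done < <(PCI_ALLOWED=$dev scripts/setup.sh config)
    (( found == 1 ))
  }

  # verify_active 0000:65:00.0 'mount@nvme0n1:nvme0n1p1'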
00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:03:56.903 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:57.163 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:57.163 00:03:57.163 real 0m13.075s 00:03:57.163 user 0m3.988s 00:03:57.163 sys 0m6.935s 00:03:57.163 16:48:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:57.164 16:48:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:57.164 ************************************ 00:03:57.164 END TEST nvme_mount 00:03:57.164 ************************************ 00:03:57.164 
16:48:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:57.164 16:48:35 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:57.164 16:48:35 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.164 16:48:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:57.164 ************************************ 00:03:57.164 START TEST dm_mount 00:03:57.164 ************************************ 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.164 16:48:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:58.544 Creating new GPT entries in memory. 00:03:58.544 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.544 other utilities. 00:03:58.544 16:48:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.544 16:48:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.544 16:48:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.544 16:48:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.544 16:48:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:59.485 Creating new GPT entries in memory. 00:03:59.485 The operation has completed successfully. 
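The dm_mount test starting here repeats the partitioning dance with two 1 GiB partitions and then layers a device-mapper target on top before formatting. The log never prints the table piped into dmsetup create, so the linear concatenation below is an explicit assumption, as are the mount point and variable names:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # nvme0n1p1, sectors as logged
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # nvme0n1p2, sectors as logged
  # scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2   # wait for both partitions

  p1=$(blockdev --getsz "${disk}p1")   # partition sizes in 512-byte sectors
  p2=$(blockdev --getsz "${disk}p2")
  printf '%s\n' \
    "0 $p1 linear ${disk}p1 0" \
    "$p1 $p2 linear ${disk}p2 0" | dmsetup create nvme_dm_test   # assumed linear table over both partitions

  readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0 in the log
  mkdir -p /tmp/dm_mount                 # example mount point
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /tmp/dm_mount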
00:03:59.485 16:48:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.485 16:48:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.485 16:48:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.485 16:48:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.485 16:48:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:00.457 The operation has completed successfully. 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1230589 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:00.457 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.458 16:48:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.024 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.284 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:03.285 16:48:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.545 
16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.545 16:48:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:06.842 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:07.102 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:07.102 00:04:07.102 real 0m9.925s 00:04:07.102 user 0m2.492s 00:04:07.102 sys 0m4.347s 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.102 16:48:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:07.102 ************************************ 00:04:07.102 END TEST dm_mount 00:04:07.102 ************************************ 00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
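The teardown running across this point applies the same pattern to both the dm mount and the plain nvme mount: unmount the test directory, remove the device-mapper target, then erase the filesystem and partition-table signatures, exactly as the wipefs calls just below show. A minimal standalone sketch of that sequence, assuming the same nvme0n1 layout and the nvme_dm_test target name used here (scratch disk only, run as root; MOUNT_DIR is an illustrative variable, not one taken from the script):

# Sketch of the cleanup_dm / cleanup_nvme steps visible in this log.
MOUNT_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
mountpoint -q "$MOUNT_DIR" && umount "$MOUNT_DIR"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    [[ -b $part ]] && wipefs --all "$part"          # drops the ext4 signatures created by the test
done
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1  # clears GPT/PMBR; the kernel re-reads the partition table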
00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.362 16:48:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.623 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.623 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.623 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.623 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.623 16:48:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:07.623 00:04:07.623 real 0m27.490s 00:04:07.623 user 0m8.073s 00:04:07.623 sys 0m14.021s 00:04:07.623 16:48:46 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.623 16:48:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.623 ************************************ 00:04:07.623 END TEST devices 00:04:07.623 ************************************ 00:04:07.623 00:04:07.623 real 1m32.892s 00:04:07.623 user 0m30.834s 00:04:07.623 sys 0m52.917s 00:04:07.623 16:48:46 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.623 16:48:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.623 ************************************ 00:04:07.623 END TEST setup.sh 00:04:07.623 ************************************ 00:04:07.623 16:48:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:10.169 Hugepages 00:04:10.169 node hugesize free / total 00:04:10.169 node0 1048576kB 0 / 0 00:04:10.169 node0 2048kB 2048 / 2048 00:04:10.169 node1 1048576kB 0 / 0 00:04:10.169 node1 2048kB 0 / 0 00:04:10.169 00:04:10.169 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.169 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:10.169 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:10.430 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:10.430 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:10.430 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:10.430 16:48:49 -- spdk/autotest.sh@130 -- # uname -s 
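The setup.sh status table just above reports 2048 kB hugepages per NUMA node plus every I/OAT and NVMe controller with its BDF, vendor:device ID, NUMA node, bound driver and block devices. Roughly the same data can be read straight from sysfs; a hedged sketch using standard sysfs paths, shown for the NVMe disk at 0000:65:00.0 and not meant as the script's actual implementation:

# 2 MB hugepage counts per NUMA node (total and free).
for node in /sys/devices/system/node/node*; do
    hp=$node/hugepages/hugepages-2048kB
    echo "$(basename "$node"): $(cat "$hp/free_hugepages") free of $(cat "$hp/nr_hugepages")"
done

# Driver binding, NUMA node and IDs for one controller.
bdf=0000:65:00.0
echo "driver: $(basename "$(readlink /sys/bus/pci/devices/$bdf/driver)")"   # nvme or vfio-pci
echo "numa:   $(cat /sys/bus/pci/devices/$bdf/numa_node)"
lspci -s "$bdf" -nn                                                         # shows 144d:a80a here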
00:04:10.430 16:48:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:10.430 16:48:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:10.430 16:48:49 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.735 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.735 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:15.117 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:15.689 16:48:54 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:16.632 16:48:55 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:16.632 16:48:55 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:16.632 16:48:55 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.632 16:48:55 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:16.632 16:48:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:16.632 16:48:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:16.632 16:48:55 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.632 16:48:55 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.632 16:48:55 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:16.632 16:48:55 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:16.632 16:48:55 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:04:16.632 16:48:55 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.937 Waiting for block devices as requested 00:04:19.937 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:19.937 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:19.937 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:20.198 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:20.198 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:20.198 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:20.458 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:20.458 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:20.458 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:20.719 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:20.719 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:20.719 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:20.979 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:20.979 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:20.979 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:20.979 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:21.238 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:21.500 16:49:00 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:04:21.500 16:49:00 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:04:21.500 16:49:00 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:21.500 16:49:00 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:21.500 16:49:00 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:21.500 16:49:00 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:21.500 16:49:00 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:04:21.500 16:49:00 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:21.500 16:49:00 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:21.500 16:49:00 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:21.500 16:49:00 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:21.500 16:49:00 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:21.500 16:49:00 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:21.500 16:49:00 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:21.500 16:49:00 -- common/autotest_common.sh@1553 -- # continue 00:04:21.500 16:49:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:21.500 16:49:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.500 16:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.500 16:49:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:21.500 16:49:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:21.500 16:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.500 16:49:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.804 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:24.804 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:25.065 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:25.065 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:25.327 16:49:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:25.327 16:49:03 -- common/autotest_common.sh@726 -- # xtrace_disable 
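nvme_namespace_revert above resolves each NVMe BDF from gen_nvme.sh to its character device through sysfs, then reads two id-ctrl fields with nvme-cli: OACS (bit 3 is namespace management, and 0x5f & 0x8 = 8, so it is supported here) and unvmcap, the unallocated capacity, which is 0, so the loop simply continues. A condensed sketch of that probe, assuming a single controller at 0000:65:00.0:

# Sketch of the OACS / unvmcap check performed above.
bdf=0000:65:00.0
ctrl=$(basename /sys/bus/pci/devices/$bdf/nvme/nvme*)      # assumes exactly one controller dir, e.g. nvme0
oacs=$(nvme id-ctrl /dev/$ctrl | grep oacs | cut -d: -f2)
unvmcap=$(nvme id-ctrl /dev/$ctrl | grep unvmcap | cut -d: -f2)
if (( (oacs & 0x8) != 0 )) && (( unvmcap != 0 )); then
    echo "/dev/$ctrl: namespace management supported and unallocated capacity present"
else
    echo "/dev/$ctrl: nothing to revert"                   # the branch taken in this run
fi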
00:04:25.327 16:49:03 -- common/autotest_common.sh@10 -- # set +x 00:04:25.327 16:49:04 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:25.327 16:49:04 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:25.327 16:49:04 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.327 16:49:04 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:25.327 16:49:04 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:25.327 16:49:04 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:25.327 16:49:04 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:25.327 16:49:04 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:25.327 16:49:04 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.327 16:49:04 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.327 16:49:04 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:25.327 16:49:04 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:25.327 16:49:04 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:04:25.327 16:49:04 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:25.327 16:49:04 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:25.327 16:49:04 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:04:25.327 16:49:04 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:25.327 16:49:04 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:25.327 16:49:04 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:25.327 16:49:04 -- common/autotest_common.sh@1589 -- # return 0 00:04:25.327 16:49:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:25.327 16:49:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:25.327 16:49:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:25.327 16:49:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:25.327 16:49:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:25.327 16:49:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:25.327 16:49:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.327 16:49:04 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.327 16:49:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.327 16:49:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.327 16:49:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.588 ************************************ 00:04:25.588 START TEST env 00:04:25.588 ************************************ 00:04:25.588 16:49:04 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.588 * Looking for test storage... 
00:04:25.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:25.588 16:49:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.588 16:49:04 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.588 16:49:04 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.588 16:49:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.588 ************************************ 00:04:25.588 START TEST env_memory 00:04:25.588 ************************************ 00:04:25.588 16:49:04 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.588 00:04:25.588 00:04:25.588 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.588 http://cunit.sourceforge.net/ 00:04:25.588 00:04:25.588 00:04:25.588 Suite: memory 00:04:25.588 Test: alloc and free memory map ...[2024-05-15 16:49:04.369315] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.588 passed 00:04:25.589 Test: mem map translation ...[2024-05-15 16:49:04.394619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.589 [2024-05-15 16:49:04.394637] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.589 [2024-05-15 16:49:04.394683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.589 [2024-05-15 16:49:04.394690] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.850 passed 00:04:25.850 Test: mem map registration ...[2024-05-15 16:49:04.449751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:25.850 [2024-05-15 16:49:04.449768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:25.850 passed 00:04:25.850 Test: mem map adjacent registrations ...passed 00:04:25.850 00:04:25.850 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.850 suites 1 1 n/a 0 0 00:04:25.850 tests 4 4 4 0 0 00:04:25.850 asserts 152 152 152 0 n/a 00:04:25.850 00:04:25.850 Elapsed time = 0.193 seconds 00:04:25.850 00:04:25.850 real 0m0.206s 00:04:25.850 user 0m0.195s 00:04:25.850 sys 0m0.011s 00:04:25.850 16:49:04 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.850 16:49:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:25.850 ************************************ 00:04:25.850 END TEST env_memory 00:04:25.850 ************************************ 00:04:25.850 16:49:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.850 16:49:04 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.850 16:49:04 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:04:25.850 16:49:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.850 ************************************ 00:04:25.850 START TEST env_vtophys 00:04:25.850 ************************************ 00:04:25.850 16:49:04 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.850 EAL: lib.eal log level changed from notice to debug 00:04:25.850 EAL: Detected lcore 0 as core 0 on socket 0 00:04:25.850 EAL: Detected lcore 1 as core 1 on socket 0 00:04:25.850 EAL: Detected lcore 2 as core 2 on socket 0 00:04:25.850 EAL: Detected lcore 3 as core 3 on socket 0 00:04:25.850 EAL: Detected lcore 4 as core 4 on socket 0 00:04:25.850 EAL: Detected lcore 5 as core 5 on socket 0 00:04:25.850 EAL: Detected lcore 6 as core 6 on socket 0 00:04:25.850 EAL: Detected lcore 7 as core 7 on socket 0 00:04:25.850 EAL: Detected lcore 8 as core 8 on socket 0 00:04:25.850 EAL: Detected lcore 9 as core 9 on socket 0 00:04:25.850 EAL: Detected lcore 10 as core 10 on socket 0 00:04:25.850 EAL: Detected lcore 11 as core 11 on socket 0 00:04:25.850 EAL: Detected lcore 12 as core 12 on socket 0 00:04:25.850 EAL: Detected lcore 13 as core 13 on socket 0 00:04:25.850 EAL: Detected lcore 14 as core 14 on socket 0 00:04:25.850 EAL: Detected lcore 15 as core 15 on socket 0 00:04:25.850 EAL: Detected lcore 16 as core 16 on socket 0 00:04:25.850 EAL: Detected lcore 17 as core 17 on socket 0 00:04:25.850 EAL: Detected lcore 18 as core 18 on socket 0 00:04:25.851 EAL: Detected lcore 19 as core 19 on socket 0 00:04:25.851 EAL: Detected lcore 20 as core 20 on socket 0 00:04:25.851 EAL: Detected lcore 21 as core 21 on socket 0 00:04:25.851 EAL: Detected lcore 22 as core 22 on socket 0 00:04:25.851 EAL: Detected lcore 23 as core 23 on socket 0 00:04:25.851 EAL: Detected lcore 24 as core 24 on socket 0 00:04:25.851 EAL: Detected lcore 25 as core 25 on socket 0 00:04:25.851 EAL: Detected lcore 26 as core 26 on socket 0 00:04:25.851 EAL: Detected lcore 27 as core 27 on socket 0 00:04:25.851 EAL: Detected lcore 28 as core 28 on socket 0 00:04:25.851 EAL: Detected lcore 29 as core 29 on socket 0 00:04:25.851 EAL: Detected lcore 30 as core 30 on socket 0 00:04:25.851 EAL: Detected lcore 31 as core 31 on socket 0 00:04:25.851 EAL: Detected lcore 32 as core 32 on socket 0 00:04:25.851 EAL: Detected lcore 33 as core 33 on socket 0 00:04:25.851 EAL: Detected lcore 34 as core 34 on socket 0 00:04:25.851 EAL: Detected lcore 35 as core 35 on socket 0 00:04:25.851 EAL: Detected lcore 36 as core 0 on socket 1 00:04:25.851 EAL: Detected lcore 37 as core 1 on socket 1 00:04:25.851 EAL: Detected lcore 38 as core 2 on socket 1 00:04:25.851 EAL: Detected lcore 39 as core 3 on socket 1 00:04:25.851 EAL: Detected lcore 40 as core 4 on socket 1 00:04:25.851 EAL: Detected lcore 41 as core 5 on socket 1 00:04:25.851 EAL: Detected lcore 42 as core 6 on socket 1 00:04:25.851 EAL: Detected lcore 43 as core 7 on socket 1 00:04:25.851 EAL: Detected lcore 44 as core 8 on socket 1 00:04:25.851 EAL: Detected lcore 45 as core 9 on socket 1 00:04:25.851 EAL: Detected lcore 46 as core 10 on socket 1 00:04:25.851 EAL: Detected lcore 47 as core 11 on socket 1 00:04:25.851 EAL: Detected lcore 48 as core 12 on socket 1 00:04:25.851 EAL: Detected lcore 49 as core 13 on socket 1 00:04:25.851 EAL: Detected lcore 50 as core 14 on socket 1 00:04:25.851 EAL: Detected lcore 51 as core 15 on socket 1 00:04:25.851 EAL: Detected lcore 52 as core 16 on socket 1 00:04:25.851 EAL: Detected lcore 
53 as core 17 on socket 1 00:04:25.851 EAL: Detected lcore 54 as core 18 on socket 1 00:04:25.851 EAL: Detected lcore 55 as core 19 on socket 1 00:04:25.851 EAL: Detected lcore 56 as core 20 on socket 1 00:04:25.851 EAL: Detected lcore 57 as core 21 on socket 1 00:04:25.851 EAL: Detected lcore 58 as core 22 on socket 1 00:04:25.851 EAL: Detected lcore 59 as core 23 on socket 1 00:04:25.851 EAL: Detected lcore 60 as core 24 on socket 1 00:04:25.851 EAL: Detected lcore 61 as core 25 on socket 1 00:04:25.851 EAL: Detected lcore 62 as core 26 on socket 1 00:04:25.851 EAL: Detected lcore 63 as core 27 on socket 1 00:04:25.851 EAL: Detected lcore 64 as core 28 on socket 1 00:04:25.851 EAL: Detected lcore 65 as core 29 on socket 1 00:04:25.851 EAL: Detected lcore 66 as core 30 on socket 1 00:04:25.851 EAL: Detected lcore 67 as core 31 on socket 1 00:04:25.851 EAL: Detected lcore 68 as core 32 on socket 1 00:04:25.851 EAL: Detected lcore 69 as core 33 on socket 1 00:04:25.851 EAL: Detected lcore 70 as core 34 on socket 1 00:04:25.851 EAL: Detected lcore 71 as core 35 on socket 1 00:04:25.851 EAL: Detected lcore 72 as core 0 on socket 0 00:04:25.851 EAL: Detected lcore 73 as core 1 on socket 0 00:04:25.851 EAL: Detected lcore 74 as core 2 on socket 0 00:04:25.851 EAL: Detected lcore 75 as core 3 on socket 0 00:04:25.851 EAL: Detected lcore 76 as core 4 on socket 0 00:04:25.851 EAL: Detected lcore 77 as core 5 on socket 0 00:04:25.851 EAL: Detected lcore 78 as core 6 on socket 0 00:04:25.851 EAL: Detected lcore 79 as core 7 on socket 0 00:04:25.851 EAL: Detected lcore 80 as core 8 on socket 0 00:04:25.851 EAL: Detected lcore 81 as core 9 on socket 0 00:04:25.851 EAL: Detected lcore 82 as core 10 on socket 0 00:04:25.851 EAL: Detected lcore 83 as core 11 on socket 0 00:04:25.851 EAL: Detected lcore 84 as core 12 on socket 0 00:04:25.851 EAL: Detected lcore 85 as core 13 on socket 0 00:04:25.851 EAL: Detected lcore 86 as core 14 on socket 0 00:04:25.851 EAL: Detected lcore 87 as core 15 on socket 0 00:04:25.851 EAL: Detected lcore 88 as core 16 on socket 0 00:04:25.851 EAL: Detected lcore 89 as core 17 on socket 0 00:04:25.851 EAL: Detected lcore 90 as core 18 on socket 0 00:04:25.851 EAL: Detected lcore 91 as core 19 on socket 0 00:04:25.851 EAL: Detected lcore 92 as core 20 on socket 0 00:04:25.851 EAL: Detected lcore 93 as core 21 on socket 0 00:04:25.851 EAL: Detected lcore 94 as core 22 on socket 0 00:04:25.851 EAL: Detected lcore 95 as core 23 on socket 0 00:04:25.851 EAL: Detected lcore 96 as core 24 on socket 0 00:04:25.851 EAL: Detected lcore 97 as core 25 on socket 0 00:04:25.851 EAL: Detected lcore 98 as core 26 on socket 0 00:04:25.851 EAL: Detected lcore 99 as core 27 on socket 0 00:04:25.851 EAL: Detected lcore 100 as core 28 on socket 0 00:04:25.851 EAL: Detected lcore 101 as core 29 on socket 0 00:04:25.851 EAL: Detected lcore 102 as core 30 on socket 0 00:04:25.851 EAL: Detected lcore 103 as core 31 on socket 0 00:04:25.851 EAL: Detected lcore 104 as core 32 on socket 0 00:04:25.851 EAL: Detected lcore 105 as core 33 on socket 0 00:04:25.851 EAL: Detected lcore 106 as core 34 on socket 0 00:04:25.851 EAL: Detected lcore 107 as core 35 on socket 0 00:04:25.851 EAL: Detected lcore 108 as core 0 on socket 1 00:04:25.851 EAL: Detected lcore 109 as core 1 on socket 1 00:04:25.851 EAL: Detected lcore 110 as core 2 on socket 1 00:04:25.851 EAL: Detected lcore 111 as core 3 on socket 1 00:04:25.851 EAL: Detected lcore 112 as core 4 on socket 1 00:04:25.851 EAL: Detected lcore 113 as core 5 on 
socket 1 00:04:25.851 EAL: Detected lcore 114 as core 6 on socket 1 00:04:25.851 EAL: Detected lcore 115 as core 7 on socket 1 00:04:25.851 EAL: Detected lcore 116 as core 8 on socket 1 00:04:25.851 EAL: Detected lcore 117 as core 9 on socket 1 00:04:25.851 EAL: Detected lcore 118 as core 10 on socket 1 00:04:25.851 EAL: Detected lcore 119 as core 11 on socket 1 00:04:25.851 EAL: Detected lcore 120 as core 12 on socket 1 00:04:25.851 EAL: Detected lcore 121 as core 13 on socket 1 00:04:25.851 EAL: Detected lcore 122 as core 14 on socket 1 00:04:25.851 EAL: Detected lcore 123 as core 15 on socket 1 00:04:25.851 EAL: Detected lcore 124 as core 16 on socket 1 00:04:25.851 EAL: Detected lcore 125 as core 17 on socket 1 00:04:25.851 EAL: Detected lcore 126 as core 18 on socket 1 00:04:25.851 EAL: Detected lcore 127 as core 19 on socket 1 00:04:25.851 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:25.851 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:25.851 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:25.851 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:25.851 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:25.851 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:25.851 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:25.851 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:25.851 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:25.851 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:25.851 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:25.851 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:25.851 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:25.851 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:25.851 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:25.851 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:25.851 EAL: Maximum logical cores by configuration: 128 00:04:25.851 EAL: Detected CPU lcores: 128 00:04:25.851 EAL: Detected NUMA nodes: 2 00:04:25.851 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:25.851 EAL: Detected shared linkage of DPDK 00:04:25.851 EAL: No shared files mode enabled, IPC will be disabled 00:04:25.851 EAL: Bus pci wants IOVA as 'DC' 00:04:25.851 EAL: Buses did not request a specific IOVA mode. 00:04:25.851 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:25.851 EAL: Selected IOVA mode 'VA' 00:04:25.851 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.851 EAL: Probing VFIO support... 00:04:25.851 EAL: IOMMU type 1 (Type 1) is supported 00:04:25.851 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:25.851 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:25.851 EAL: VFIO support initialized 00:04:25.851 EAL: Ask a virtual area of 0x2e000 bytes 00:04:25.851 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:25.851 EAL: Setting up physically contiguous memory... 
00:04:25.851 EAL: Setting maximum number of open files to 524288 00:04:25.851 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:25.851 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:25.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:25.851 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.851 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:25.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.851 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.851 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:25.851 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:25.851 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.851 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:25.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.851 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.851 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:25.851 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:25.851 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.851 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:25.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.851 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.851 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:25.851 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:25.851 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.851 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:25.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.851 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.851 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:25.851 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:25.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:25.851 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.851 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:25.851 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.851 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.851 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:25.851 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:25.851 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.851 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:25.851 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.851 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.851 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:25.851 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:25.852 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.852 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:25.852 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.852 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.852 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:25.852 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:25.852 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.852 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:25.852 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.852 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.852 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:25.852 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:25.852 EAL: Hugepages will be freed exactly as allocated. 00:04:25.852 EAL: No shared files mode enabled, IPC is disabled 00:04:25.852 EAL: No shared files mode enabled, IPC is disabled 00:04:25.852 EAL: TSC frequency is ~2400000 KHz 00:04:25.852 EAL: Main lcore 0 is ready (tid=7fce3c5afa00;cpuset=[0]) 00:04:25.852 EAL: Trying to obtain current memory policy. 00:04:25.852 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.852 EAL: Restoring previous memory policy: 0 00:04:25.852 EAL: request: mp_malloc_sync 00:04:25.852 EAL: No shared files mode enabled, IPC is disabled 00:04:25.852 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.852 EAL: No shared files mode enabled, IPC is disabled 00:04:26.112 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.112 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.112 00:04:26.112 00:04:26.112 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.112 http://cunit.sourceforge.net/ 00:04:26.113 00:04:26.113 00:04:26.113 Suite: components_suite 00:04:26.113 Test: vtophys_malloc_test ...passed 00:04:26.113 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 4MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 4MB 00:04:26.113 EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 6MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 6MB 00:04:26.113 EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 10MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 10MB 00:04:26.113 EAL: Trying to obtain current memory policy. 
00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 18MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 18MB 00:04:26.113 EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 34MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 34MB 00:04:26.113 EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 66MB 00:04:26.113 EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 130MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 130MB 00:04:26.113 EAL: Trying to obtain current memory policy. 00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.113 EAL: Restoring previous memory policy: 4 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.113 EAL: request: mp_malloc_sync 00:04:26.113 EAL: No shared files mode enabled, IPC is disabled 00:04:26.113 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.113 EAL: Trying to obtain current memory policy. 
00:04:26.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.374 EAL: Restoring previous memory policy: 4 00:04:26.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.374 EAL: request: mp_malloc_sync 00:04:26.374 EAL: No shared files mode enabled, IPC is disabled 00:04:26.374 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.374 EAL: request: mp_malloc_sync 00:04:26.374 EAL: No shared files mode enabled, IPC is disabled 00:04:26.374 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.374 EAL: Trying to obtain current memory policy. 00:04:26.374 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.374 EAL: Restoring previous memory policy: 4 00:04:26.374 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.374 EAL: request: mp_malloc_sync 00:04:26.374 EAL: No shared files mode enabled, IPC is disabled 00:04:26.374 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.636 EAL: request: mp_malloc_sync 00:04:26.636 EAL: No shared files mode enabled, IPC is disabled 00:04:26.636 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:26.636 passed 00:04:26.636 00:04:26.636 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.636 suites 1 1 n/a 0 0 00:04:26.636 tests 2 2 2 0 0 00:04:26.636 asserts 497 497 497 0 n/a 00:04:26.636 00:04:26.636 Elapsed time = 0.663 seconds 00:04:26.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.636 EAL: request: mp_malloc_sync 00:04:26.636 EAL: No shared files mode enabled, IPC is disabled 00:04:26.636 EAL: Heap on socket 0 was shrunk by 2MB 00:04:26.636 EAL: No shared files mode enabled, IPC is disabled 00:04:26.636 EAL: No shared files mode enabled, IPC is disabled 00:04:26.636 EAL: No shared files mode enabled, IPC is disabled 00:04:26.636 00:04:26.636 real 0m0.796s 00:04:26.636 user 0m0.414s 00:04:26.636 sys 0m0.346s 00:04:26.636 16:49:05 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.636 16:49:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:26.636 ************************************ 00:04:26.636 END TEST env_vtophys 00:04:26.636 ************************************ 00:04:26.636 16:49:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.636 16:49:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.636 16:49:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.636 16:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.896 ************************************ 00:04:26.896 START TEST env_pci 00:04:26.896 ************************************ 00:04:26.896 16:49:05 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.896 00:04:26.896 00:04:26.896 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.896 http://cunit.sourceforge.net/ 00:04:26.896 00:04:26.896 00:04:26.896 Suite: pci 00:04:26.896 Test: pci_hook ...[2024-05-15 16:49:05.498384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1241623 has claimed it 00:04:26.896 EAL: Cannot find device (10000:00:01.0) 00:04:26.896 EAL: Failed to attach device on primary process 00:04:26.896 passed 00:04:26.896 00:04:26.896 Run Summary: Type Total Ran Passed Failed Inactive 
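The vtophys suite that just finished grows and shrinks the DPDK heap in steps from 4 MB up to 1026 MB, and every step is reported back to SPDK through the registered 'spdk:(nil)' mem event callback. Because the heap is backed by the 2048 kB hugepages reserved earlier, the growth is also visible from outside the process; a rough, illustrative way to watch it while re-running the same binary (root, hugepages already configured, timing only approximate since the whole test takes well under a second):

# Illustrative only: sample hugepage usage while the vtophys unit test runs.
VTOPHYS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
grep -E 'HugePages_(Total|Free)' /proc/meminfo             # baseline
"$VTOPHYS" & pid=$!
while kill -0 "$pid" 2>/dev/null; do                       # free pages drop as the heap expands
    grep HugePages_Free /proc/meminfo
    sleep 0.1
done
wait "$pid"
grep HugePages_Free /proc/meminfo                          # back to the baseline:
                                                           # "Hugepages will be freed exactly as allocated"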
00:04:26.897 suites 1 1 n/a 0 0 00:04:26.897 tests 1 1 1 0 0 00:04:26.897 asserts 25 25 25 0 n/a 00:04:26.897 00:04:26.897 Elapsed time = 0.029 seconds 00:04:26.897 00:04:26.897 real 0m0.049s 00:04:26.897 user 0m0.019s 00:04:26.897 sys 0m0.030s 00:04:26.897 16:49:05 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.897 16:49:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:26.897 ************************************ 00:04:26.897 END TEST env_pci 00:04:26.897 ************************************ 00:04:26.897 16:49:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.897 16:49:05 env -- env/env.sh@15 -- # uname 00:04:26.897 16:49:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.897 16:49:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.897 16:49:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.897 16:49:05 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:26.897 16:49:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.897 16:49:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.897 ************************************ 00:04:26.897 START TEST env_dpdk_post_init 00:04:26.897 ************************************ 00:04:26.897 16:49:05 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.897 EAL: Detected CPU lcores: 128 00:04:26.897 EAL: Detected NUMA nodes: 2 00:04:26.897 EAL: Detected shared linkage of DPDK 00:04:26.897 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.897 EAL: Selected IOVA mode 'VA' 00:04:26.897 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.897 EAL: VFIO support initialized 00:04:26.897 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.897 EAL: Using IOMMU type 1 (Type 1) 00:04:27.158 EAL: Ignore mapping IO port bar(1) 00:04:27.158 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:27.419 EAL: Ignore mapping IO port bar(1) 00:04:27.419 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:27.680 EAL: Ignore mapping IO port bar(1) 00:04:27.680 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:27.680 EAL: Ignore mapping IO port bar(1) 00:04:27.940 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:27.940 EAL: Ignore mapping IO port bar(1) 00:04:28.201 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:28.201 EAL: Ignore mapping IO port bar(1) 00:04:28.462 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:28.462 EAL: Ignore mapping IO port bar(1) 00:04:28.462 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:28.724 EAL: Ignore mapping IO port bar(1) 00:04:28.724 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:28.985 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:29.246 EAL: Ignore mapping IO port bar(1) 00:04:29.246 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:29.246 EAL: Ignore mapping IO port bar(1) 00:04:29.555 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:04:29.555 EAL: Ignore mapping IO port bar(1) 00:04:29.847 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:29.847 EAL: Ignore mapping IO port bar(1) 00:04:29.847 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:30.109 EAL: Ignore mapping IO port bar(1) 00:04:30.109 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:30.370 EAL: Ignore mapping IO port bar(1) 00:04:30.370 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:30.370 EAL: Ignore mapping IO port bar(1) 00:04:30.703 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:30.703 EAL: Ignore mapping IO port bar(1) 00:04:30.703 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:30.962 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:30.962 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:30.962 Starting DPDK initialization... 00:04:30.962 Starting SPDK post initialization... 00:04:30.962 SPDK NVMe probe 00:04:30.962 Attaching to 0000:65:00.0 00:04:30.962 Attached to 0000:65:00.0 00:04:30.962 Cleaning up... 00:04:32.874 00:04:32.874 real 0m5.705s 00:04:32.874 user 0m0.174s 00:04:32.874 sys 0m0.078s 00:04:32.874 16:49:11 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.874 16:49:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 ************************************ 00:04:32.874 END TEST env_dpdk_post_init 00:04:32.874 ************************************ 00:04:32.874 16:49:11 env -- env/env.sh@26 -- # uname 00:04:32.874 16:49:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.874 16:49:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.874 16:49:11 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.874 16:49:11 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.874 16:49:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 ************************************ 00:04:32.874 START TEST env_mem_callbacks 00:04:32.874 ************************************ 00:04:32.874 16:49:11 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.874 EAL: Detected CPU lcores: 128 00:04:32.874 EAL: Detected NUMA nodes: 2 00:04:32.874 EAL: Detected shared linkage of DPDK 00:04:32.874 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.874 EAL: Selected IOVA mode 'VA' 00:04:32.874 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.874 EAL: VFIO support initialized 00:04:32.874 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.874 00:04:32.874 00:04:32.874 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.874 http://cunit.sourceforge.net/ 00:04:32.874 00:04:32.874 00:04:32.874 Suite: memory 00:04:32.874 Test: test ... 
00:04:32.874 register 0x200000200000 2097152 00:04:32.874 malloc 3145728 00:04:32.874 register 0x200000400000 4194304 00:04:32.874 buf 0x200000500000 len 3145728 PASSED 00:04:32.874 malloc 64 00:04:32.874 buf 0x2000004fff40 len 64 PASSED 00:04:32.874 malloc 4194304 00:04:32.874 register 0x200000800000 6291456 00:04:32.874 buf 0x200000a00000 len 4194304 PASSED 00:04:32.874 free 0x200000500000 3145728 00:04:32.874 free 0x2000004fff40 64 00:04:32.874 unregister 0x200000400000 4194304 PASSED 00:04:32.874 free 0x200000a00000 4194304 00:04:32.874 unregister 0x200000800000 6291456 PASSED 00:04:32.874 malloc 8388608 00:04:32.874 register 0x200000400000 10485760 00:04:32.874 buf 0x200000600000 len 8388608 PASSED 00:04:32.874 free 0x200000600000 8388608 00:04:32.874 unregister 0x200000400000 10485760 PASSED 00:04:32.874 passed 00:04:32.874 00:04:32.874 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.874 suites 1 1 n/a 0 0 00:04:32.874 tests 1 1 1 0 0 00:04:32.874 asserts 15 15 15 0 n/a 00:04:32.874 00:04:32.874 Elapsed time = 0.005 seconds 00:04:32.874 00:04:32.874 real 0m0.059s 00:04:32.874 user 0m0.021s 00:04:32.874 sys 0m0.037s 00:04:32.874 16:49:11 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.874 16:49:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 ************************************ 00:04:32.874 END TEST env_mem_callbacks 00:04:32.874 ************************************ 00:04:32.874 00:04:32.874 real 0m7.330s 00:04:32.874 user 0m1.007s 00:04:32.874 sys 0m0.845s 00:04:32.874 16:49:11 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.874 16:49:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 ************************************ 00:04:32.874 END TEST env 00:04:32.874 ************************************ 00:04:32.874 16:49:11 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.874 16:49:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.874 16:49:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.874 16:49:11 -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 ************************************ 00:04:32.874 START TEST rpc 00:04:32.874 ************************************ 00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.874 * Looking for test storage... 00:04:32.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:32.874 16:49:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1243072 00:04:32.874 16:49:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.874 16:49:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:32.874 16:49:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1243072 00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@827 -- # '[' -z 1243072 ']' 00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
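The rpc suite starting here launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and waits for it to listen on /var/tmp/spdk.sock before issuing RPCs; the rpc_integrity test below then creates a malloc bdev, layers a passthru bdev on top of it and inspects both with bdev_get_bdevs. A manual equivalent, sketched with a simple socket poll standing in for the waitforlisten helper the harness uses:

# Sketch: drive the same RPCs by hand against a freshly started target.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -e bdev & tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done        # crude stand-in for waitforlisten
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_malloc_create 8 512                              # prints the new bdev name, Malloc0 (16384 x 512 B blocks)
$rpc bdev_passthru_create -b Malloc0 -p Passthru0
$rpc bdev_get_bdevs | jq length                            # 2 once both bdevs exist
kill "$tgt_pid"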
00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:32.874 16:49:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.135 [2024-05-15 16:49:11.752699] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:04:33.135 [2024-05-15 16:49:11.752760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243072 ] 00:04:33.135 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.135 [2024-05-15 16:49:11.817604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.135 [2024-05-15 16:49:11.891781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.135 [2024-05-15 16:49:11.891821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1243072' to capture a snapshot of events at runtime. 00:04:33.135 [2024-05-15 16:49:11.891829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.135 [2024-05-15 16:49:11.891836] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.135 [2024-05-15 16:49:11.891842] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1243072 for offline analysis/debug. 00:04:33.135 [2024-05-15 16:49:11.891868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.704 16:49:12 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:33.704 16:49:12 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:33.704 16:49:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.705 16:49:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.705 16:49:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.705 16:49:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.705 16:49:12 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.705 16:49:12 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.705 16:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.965 ************************************ 00:04:33.965 START TEST rpc_integrity 00:04:33.965 ************************************ 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.965 16:49:12 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.965 { 00:04:33.965 "name": "Malloc0", 00:04:33.965 "aliases": [ 00:04:33.965 "a441ed32-50ef-413a-836b-c6531475c31f" 00:04:33.965 ], 00:04:33.965 "product_name": "Malloc disk", 00:04:33.965 "block_size": 512, 00:04:33.965 "num_blocks": 16384, 00:04:33.965 "uuid": "a441ed32-50ef-413a-836b-c6531475c31f", 00:04:33.965 "assigned_rate_limits": { 00:04:33.965 "rw_ios_per_sec": 0, 00:04:33.965 "rw_mbytes_per_sec": 0, 00:04:33.965 "r_mbytes_per_sec": 0, 00:04:33.965 "w_mbytes_per_sec": 0 00:04:33.965 }, 00:04:33.965 "claimed": false, 00:04:33.965 "zoned": false, 00:04:33.965 "supported_io_types": { 00:04:33.965 "read": true, 00:04:33.965 "write": true, 00:04:33.965 "unmap": true, 00:04:33.965 "write_zeroes": true, 00:04:33.965 "flush": true, 00:04:33.965 "reset": true, 00:04:33.965 "compare": false, 00:04:33.965 "compare_and_write": false, 00:04:33.965 "abort": true, 00:04:33.965 "nvme_admin": false, 00:04:33.965 "nvme_io": false 00:04:33.965 }, 00:04:33.965 "memory_domains": [ 00:04:33.965 { 00:04:33.965 "dma_device_id": "system", 00:04:33.965 "dma_device_type": 1 00:04:33.965 }, 00:04:33.965 { 00:04:33.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.965 "dma_device_type": 2 00:04:33.965 } 00:04:33.965 ], 00:04:33.965 "driver_specific": {} 00:04:33.965 } 00:04:33.965 ]' 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.965 [2024-05-15 16:49:12.694880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.965 [2024-05-15 16:49:12.694914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.965 [2024-05-15 16:49:12.694925] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ea4ec0 00:04:33.965 [2024-05-15 16:49:12.694933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.965 [2024-05-15 16:49:12.696273] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.965 [2024-05-15 16:49:12.696294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.965 Passthru0 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.965 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.965 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.965 { 00:04:33.965 "name": "Malloc0", 00:04:33.965 "aliases": [ 00:04:33.965 "a441ed32-50ef-413a-836b-c6531475c31f" 00:04:33.965 ], 00:04:33.965 "product_name": "Malloc disk", 00:04:33.965 "block_size": 512, 00:04:33.965 "num_blocks": 16384, 00:04:33.965 "uuid": "a441ed32-50ef-413a-836b-c6531475c31f", 00:04:33.965 "assigned_rate_limits": { 00:04:33.965 "rw_ios_per_sec": 0, 00:04:33.965 "rw_mbytes_per_sec": 0, 00:04:33.965 "r_mbytes_per_sec": 0, 00:04:33.965 "w_mbytes_per_sec": 0 00:04:33.965 }, 00:04:33.965 "claimed": true, 00:04:33.965 "claim_type": "exclusive_write", 00:04:33.965 "zoned": false, 00:04:33.965 "supported_io_types": { 00:04:33.965 "read": true, 00:04:33.965 "write": true, 00:04:33.965 "unmap": true, 00:04:33.965 "write_zeroes": true, 00:04:33.965 "flush": true, 00:04:33.965 "reset": true, 00:04:33.965 "compare": false, 00:04:33.965 "compare_and_write": false, 00:04:33.965 "abort": true, 00:04:33.965 "nvme_admin": false, 00:04:33.965 "nvme_io": false 00:04:33.965 }, 00:04:33.965 "memory_domains": [ 00:04:33.965 { 00:04:33.965 "dma_device_id": "system", 00:04:33.965 "dma_device_type": 1 00:04:33.965 }, 00:04:33.965 { 00:04:33.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.966 "dma_device_type": 2 00:04:33.966 } 00:04:33.966 ], 00:04:33.966 "driver_specific": {} 00:04:33.966 }, 00:04:33.966 { 00:04:33.966 "name": "Passthru0", 00:04:33.966 "aliases": [ 00:04:33.966 "5c9a3a65-ef7b-53cc-8309-8a195e3535f1" 00:04:33.966 ], 00:04:33.966 "product_name": "passthru", 00:04:33.966 "block_size": 512, 00:04:33.966 "num_blocks": 16384, 00:04:33.966 "uuid": "5c9a3a65-ef7b-53cc-8309-8a195e3535f1", 00:04:33.966 "assigned_rate_limits": { 00:04:33.966 "rw_ios_per_sec": 0, 00:04:33.966 "rw_mbytes_per_sec": 0, 00:04:33.966 "r_mbytes_per_sec": 0, 00:04:33.966 "w_mbytes_per_sec": 0 00:04:33.966 }, 00:04:33.966 "claimed": false, 00:04:33.966 "zoned": false, 00:04:33.966 "supported_io_types": { 00:04:33.966 "read": true, 00:04:33.966 "write": true, 00:04:33.966 "unmap": true, 00:04:33.966 "write_zeroes": true, 00:04:33.966 "flush": true, 00:04:33.966 "reset": true, 00:04:33.966 "compare": false, 00:04:33.966 "compare_and_write": false, 00:04:33.966 "abort": true, 00:04:33.966 "nvme_admin": false, 00:04:33.966 "nvme_io": false 00:04:33.966 }, 00:04:33.966 "memory_domains": [ 00:04:33.966 { 00:04:33.966 "dma_device_id": "system", 00:04:33.966 "dma_device_type": 1 00:04:33.966 }, 00:04:33.966 { 00:04:33.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.966 "dma_device_type": 2 00:04:33.966 } 00:04:33.966 ], 00:04:33.966 "driver_specific": { 00:04:33.966 "passthru": { 00:04:33.966 "name": "Passthru0", 00:04:33.966 "base_bdev_name": "Malloc0" 00:04:33.966 } 00:04:33.966 } 00:04:33.966 } 00:04:33.966 ]' 00:04:33.966 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.966 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.966 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.966 
16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.966 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.966 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.966 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.226 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.226 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.226 16:49:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.226 00:04:34.226 real 0m0.287s 00:04:34.226 user 0m0.183s 00:04:34.226 sys 0m0.038s 00:04:34.226 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.226 16:49:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 ************************************ 00:04:34.226 END TEST rpc_integrity 00:04:34.226 ************************************ 00:04:34.226 16:49:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.226 16:49:12 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.226 16:49:12 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.226 16:49:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 ************************************ 00:04:34.226 START TEST rpc_plugins 00:04:34.226 ************************************ 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:34.226 16:49:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.226 16:49:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.226 16:49:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 16:49:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.226 16:49:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.226 { 00:04:34.226 "name": "Malloc1", 00:04:34.226 "aliases": [ 00:04:34.226 "c6943960-2ed4-4a43-b02c-2915b076bd8f" 00:04:34.226 ], 00:04:34.226 "product_name": "Malloc disk", 00:04:34.226 "block_size": 4096, 00:04:34.226 "num_blocks": 256, 00:04:34.226 "uuid": "c6943960-2ed4-4a43-b02c-2915b076bd8f", 00:04:34.226 "assigned_rate_limits": { 00:04:34.226 "rw_ios_per_sec": 0, 00:04:34.226 "rw_mbytes_per_sec": 0, 00:04:34.226 "r_mbytes_per_sec": 0, 00:04:34.226 "w_mbytes_per_sec": 0 00:04:34.226 }, 00:04:34.226 "claimed": false, 00:04:34.226 "zoned": false, 00:04:34.226 "supported_io_types": { 00:04:34.226 "read": true, 00:04:34.226 "write": true, 00:04:34.226 "unmap": true, 00:04:34.226 "write_zeroes": true, 00:04:34.226 
"flush": true, 00:04:34.226 "reset": true, 00:04:34.226 "compare": false, 00:04:34.226 "compare_and_write": false, 00:04:34.226 "abort": true, 00:04:34.226 "nvme_admin": false, 00:04:34.226 "nvme_io": false 00:04:34.226 }, 00:04:34.226 "memory_domains": [ 00:04:34.226 { 00:04:34.226 "dma_device_id": "system", 00:04:34.226 "dma_device_type": 1 00:04:34.226 }, 00:04:34.226 { 00:04:34.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.226 "dma_device_type": 2 00:04:34.226 } 00:04:34.226 ], 00:04:34.226 "driver_specific": {} 00:04:34.226 } 00:04:34.226 ]' 00:04:34.226 16:49:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.226 16:49:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.226 16:49:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.226 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.226 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.226 16:49:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.226 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.226 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.226 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.226 16:49:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.226 16:49:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.486 16:49:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.486 00:04:34.486 real 0m0.146s 00:04:34.486 user 0m0.095s 00:04:34.486 sys 0m0.019s 00:04:34.486 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.486 16:49:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.486 ************************************ 00:04:34.486 END TEST rpc_plugins 00:04:34.486 ************************************ 00:04:34.486 16:49:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.486 16:49:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.486 16:49:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.486 16:49:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.486 ************************************ 00:04:34.486 START TEST rpc_trace_cmd_test 00:04:34.486 ************************************ 00:04:34.486 16:49:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:34.486 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.487 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1243072", 00:04:34.487 "tpoint_group_mask": "0x8", 00:04:34.487 "iscsi_conn": { 00:04:34.487 "mask": "0x2", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "scsi": { 00:04:34.487 "mask": "0x4", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "bdev": { 00:04:34.487 "mask": "0x8", 00:04:34.487 "tpoint_mask": 
"0xffffffffffffffff" 00:04:34.487 }, 00:04:34.487 "nvmf_rdma": { 00:04:34.487 "mask": "0x10", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "nvmf_tcp": { 00:04:34.487 "mask": "0x20", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "ftl": { 00:04:34.487 "mask": "0x40", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "blobfs": { 00:04:34.487 "mask": "0x80", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "dsa": { 00:04:34.487 "mask": "0x200", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "thread": { 00:04:34.487 "mask": "0x400", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "nvme_pcie": { 00:04:34.487 "mask": "0x800", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "iaa": { 00:04:34.487 "mask": "0x1000", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "nvme_tcp": { 00:04:34.487 "mask": "0x2000", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "bdev_nvme": { 00:04:34.487 "mask": "0x4000", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 }, 00:04:34.487 "sock": { 00:04:34.487 "mask": "0x8000", 00:04:34.487 "tpoint_mask": "0x0" 00:04:34.487 } 00:04:34.487 }' 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.487 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.747 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.747 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.747 16:49:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.747 00:04:34.747 real 0m0.248s 00:04:34.747 user 0m0.209s 00:04:34.747 sys 0m0.029s 00:04:34.747 16:49:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.747 16:49:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 ************************************ 00:04:34.747 END TEST rpc_trace_cmd_test 00:04:34.747 ************************************ 00:04:34.747 16:49:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.747 16:49:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.747 16:49:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.747 16:49:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.747 16:49:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.747 16:49:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 ************************************ 00:04:34.747 START TEST rpc_daemon_integrity 00:04:34.747 ************************************ 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.747 { 00:04:34.747 "name": "Malloc2", 00:04:34.747 "aliases": [ 00:04:34.747 "3ca94ad3-e40d-4ae0-8bdd-b948f3d5bb5b" 00:04:34.747 ], 00:04:34.747 "product_name": "Malloc disk", 00:04:34.747 "block_size": 512, 00:04:34.747 "num_blocks": 16384, 00:04:34.747 "uuid": "3ca94ad3-e40d-4ae0-8bdd-b948f3d5bb5b", 00:04:34.747 "assigned_rate_limits": { 00:04:34.747 "rw_ios_per_sec": 0, 00:04:34.747 "rw_mbytes_per_sec": 0, 00:04:34.747 "r_mbytes_per_sec": 0, 00:04:34.747 "w_mbytes_per_sec": 0 00:04:34.747 }, 00:04:34.747 "claimed": false, 00:04:34.747 "zoned": false, 00:04:34.747 "supported_io_types": { 00:04:34.747 "read": true, 00:04:34.747 "write": true, 00:04:34.747 "unmap": true, 00:04:34.747 "write_zeroes": true, 00:04:34.747 "flush": true, 00:04:34.747 "reset": true, 00:04:34.747 "compare": false, 00:04:34.747 "compare_and_write": false, 00:04:34.747 "abort": true, 00:04:34.747 "nvme_admin": false, 00:04:34.747 "nvme_io": false 00:04:34.747 }, 00:04:34.747 "memory_domains": [ 00:04:34.747 { 00:04:34.747 "dma_device_id": "system", 00:04:34.747 "dma_device_type": 1 00:04:34.747 }, 00:04:34.747 { 00:04:34.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.747 "dma_device_type": 2 00:04:34.747 } 00:04:34.747 ], 00:04:34.747 "driver_specific": {} 00:04:34.747 } 00:04:34.747 ]' 00:04:34.747 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 [2024-05-15 16:49:13.601328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.007 [2024-05-15 16:49:13.601359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.007 [2024-05-15 16:49:13.601372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2048cb0 00:04:35.007 [2024-05-15 16:49:13.601380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.007 [2024-05-15 16:49:13.602593] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.007 [2024-05-15 16:49:13.602614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.007 Passthru0 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.007 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.007 { 00:04:35.007 "name": "Malloc2", 00:04:35.007 "aliases": [ 00:04:35.007 "3ca94ad3-e40d-4ae0-8bdd-b948f3d5bb5b" 00:04:35.007 ], 00:04:35.007 "product_name": "Malloc disk", 00:04:35.007 "block_size": 512, 00:04:35.007 "num_blocks": 16384, 00:04:35.007 "uuid": "3ca94ad3-e40d-4ae0-8bdd-b948f3d5bb5b", 00:04:35.007 "assigned_rate_limits": { 00:04:35.007 "rw_ios_per_sec": 0, 00:04:35.007 "rw_mbytes_per_sec": 0, 00:04:35.007 "r_mbytes_per_sec": 0, 00:04:35.007 "w_mbytes_per_sec": 0 00:04:35.007 }, 00:04:35.007 "claimed": true, 00:04:35.007 "claim_type": "exclusive_write", 00:04:35.007 "zoned": false, 00:04:35.007 "supported_io_types": { 00:04:35.007 "read": true, 00:04:35.007 "write": true, 00:04:35.007 "unmap": true, 00:04:35.007 "write_zeroes": true, 00:04:35.007 "flush": true, 00:04:35.007 "reset": true, 00:04:35.007 "compare": false, 00:04:35.007 "compare_and_write": false, 00:04:35.007 "abort": true, 00:04:35.007 "nvme_admin": false, 00:04:35.007 "nvme_io": false 00:04:35.007 }, 00:04:35.007 "memory_domains": [ 00:04:35.007 { 00:04:35.007 "dma_device_id": "system", 00:04:35.007 "dma_device_type": 1 00:04:35.007 }, 00:04:35.007 { 00:04:35.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.007 "dma_device_type": 2 00:04:35.007 } 00:04:35.007 ], 00:04:35.007 "driver_specific": {} 00:04:35.007 }, 00:04:35.007 { 00:04:35.007 "name": "Passthru0", 00:04:35.007 "aliases": [ 00:04:35.007 "131c318a-210c-5537-a236-7bdf6262d2db" 00:04:35.007 ], 00:04:35.007 "product_name": "passthru", 00:04:35.007 "block_size": 512, 00:04:35.007 "num_blocks": 16384, 00:04:35.007 "uuid": "131c318a-210c-5537-a236-7bdf6262d2db", 00:04:35.007 "assigned_rate_limits": { 00:04:35.007 "rw_ios_per_sec": 0, 00:04:35.007 "rw_mbytes_per_sec": 0, 00:04:35.007 "r_mbytes_per_sec": 0, 00:04:35.007 "w_mbytes_per_sec": 0 00:04:35.007 }, 00:04:35.007 "claimed": false, 00:04:35.007 "zoned": false, 00:04:35.007 "supported_io_types": { 00:04:35.007 "read": true, 00:04:35.007 "write": true, 00:04:35.008 "unmap": true, 00:04:35.008 "write_zeroes": true, 00:04:35.008 "flush": true, 00:04:35.008 "reset": true, 00:04:35.008 "compare": false, 00:04:35.008 "compare_and_write": false, 00:04:35.008 "abort": true, 00:04:35.008 "nvme_admin": false, 00:04:35.008 "nvme_io": false 00:04:35.008 }, 00:04:35.008 "memory_domains": [ 00:04:35.008 { 00:04:35.008 "dma_device_id": "system", 00:04:35.008 "dma_device_type": 1 00:04:35.008 }, 00:04:35.008 { 00:04:35.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.008 "dma_device_type": 2 00:04:35.008 } 00:04:35.008 ], 00:04:35.008 "driver_specific": { 00:04:35.008 "passthru": { 00:04:35.008 "name": "Passthru0", 00:04:35.008 "base_bdev_name": "Malloc2" 00:04:35.008 } 00:04:35.008 } 00:04:35.008 } 00:04:35.008 ]' 00:04:35.008 16:49:13 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.008 00:04:35.008 real 0m0.285s 00:04:35.008 user 0m0.180s 00:04:35.008 sys 0m0.033s 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.008 16:49:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.008 ************************************ 00:04:35.008 END TEST rpc_daemon_integrity 00:04:35.008 ************************************ 00:04:35.008 16:49:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.008 16:49:13 rpc -- rpc/rpc.sh@84 -- # killprocess 1243072 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@946 -- # '[' -z 1243072 ']' 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@950 -- # kill -0 1243072 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@951 -- # uname 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1243072 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1243072' 00:04:35.008 killing process with pid 1243072 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@965 -- # kill 1243072 00:04:35.008 16:49:13 rpc -- common/autotest_common.sh@970 -- # wait 1243072 00:04:35.267 00:04:35.267 real 0m2.452s 00:04:35.267 user 0m3.222s 00:04:35.267 sys 0m0.673s 00:04:35.267 16:49:14 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.267 16:49:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.267 ************************************ 00:04:35.267 END TEST rpc 00:04:35.267 ************************************ 00:04:35.267 16:49:14 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:35.267 16:49:14 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.267 16:49:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.267 16:49:14 -- common/autotest_common.sh@10 -- # set +x 00:04:35.528 ************************************ 00:04:35.528 START TEST skip_rpc 00:04:35.528 ************************************ 00:04:35.528 16:49:14 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:35.528 * Looking for test storage... 00:04:35.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.528 16:49:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.528 16:49:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.528 16:49:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.528 16:49:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.528 16:49:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.528 16:49:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.528 ************************************ 00:04:35.528 START TEST skip_rpc 00:04:35.528 ************************************ 00:04:35.528 16:49:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:35.528 16:49:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1243604 00:04:35.528 16:49:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.528 16:49:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.528 16:49:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.528 [2024-05-15 16:49:14.317849] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
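The skip_rpc case above starts the target with --no-rpc-server and sleeps while it initializes; the check that follows asserts that an ordinary RPC then fails, since the NOT wrapper around rpc_cmd spdk_get_version only passes when the call errors out. A standalone sketch of the same assertion, assuming scripts/rpc.py as the client:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo 'unexpected: RPC succeeded although no RPC server was started' >&2
        exit 1
    fi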
00:04:35.528 [2024-05-15 16:49:14.317908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243604 ] 00:04:35.528 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.788 [2024-05-15 16:49:14.380369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.788 [2024-05-15 16:49:14.445844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1243604 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1243604 ']' 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1243604 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1243604 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1243604' 00:04:41.070 killing process with pid 1243604 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1243604 00:04:41.070 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1243604 00:04:41.070 00:04:41.070 real 0m5.253s 00:04:41.070 user 0m5.041s 00:04:41.070 sys 0m0.221s 00:04:41.071 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.071 16:49:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.071 ************************************ 00:04:41.071 END TEST skip_rpc 
00:04:41.071 ************************************ 00:04:41.071 16:49:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.071 16:49:19 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.071 16:49:19 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.071 16:49:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.071 ************************************ 00:04:41.071 START TEST skip_rpc_with_json 00:04:41.071 ************************************ 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1244797 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1244797 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1244797 ']' 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.071 16:49:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.071 [2024-05-15 16:49:19.658256] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
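The skip_rpc_with_json case starting here first confirms that no TCP transport exists yet (the nvmf_get_transports error shown below is the expected response), then creates one and snapshots the running configuration to test/rpc/config.json with save_config. A rough equivalent using the stock RPC client, assuming the same repository root:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp || true   # expected to fail before the transport exists
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json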
00:04:41.071 [2024-05-15 16:49:19.658309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244797 ] 00:04:41.071 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.071 [2024-05-15 16:49:19.718384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.071 [2024-05-15 16:49:19.788078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.640 [2024-05-15 16:49:20.426611] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:41.640 request: 00:04:41.640 { 00:04:41.640 "trtype": "tcp", 00:04:41.640 "method": "nvmf_get_transports", 00:04:41.640 "req_id": 1 00:04:41.640 } 00:04:41.640 Got JSON-RPC error response 00:04:41.640 response: 00:04:41.640 { 00:04:41.640 "code": -19, 00:04:41.640 "message": "No such device" 00:04:41.640 } 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.640 [2024-05-15 16:49:20.438723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.640 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.901 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.901 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.901 { 00:04:41.901 "subsystems": [ 00:04:41.901 { 00:04:41.901 "subsystem": "vfio_user_target", 00:04:41.901 "config": null 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "subsystem": "keyring", 00:04:41.901 "config": [] 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "subsystem": "iobuf", 00:04:41.901 "config": [ 00:04:41.901 { 00:04:41.901 "method": "iobuf_set_options", 00:04:41.901 "params": { 00:04:41.901 "small_pool_count": 8192, 00:04:41.901 "large_pool_count": 1024, 00:04:41.901 "small_bufsize": 8192, 00:04:41.901 "large_bufsize": 135168 00:04:41.901 } 00:04:41.901 } 00:04:41.901 ] 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "subsystem": "sock", 00:04:41.901 "config": [ 00:04:41.901 { 00:04:41.901 "method": "sock_impl_set_options", 00:04:41.901 "params": { 00:04:41.901 "impl_name": "posix", 00:04:41.901 "recv_buf_size": 2097152, 00:04:41.901 "send_buf_size": 2097152, 
00:04:41.901 "enable_recv_pipe": true, 00:04:41.901 "enable_quickack": false, 00:04:41.901 "enable_placement_id": 0, 00:04:41.901 "enable_zerocopy_send_server": true, 00:04:41.901 "enable_zerocopy_send_client": false, 00:04:41.901 "zerocopy_threshold": 0, 00:04:41.901 "tls_version": 0, 00:04:41.901 "enable_ktls": false 00:04:41.901 } 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "method": "sock_impl_set_options", 00:04:41.901 "params": { 00:04:41.901 "impl_name": "ssl", 00:04:41.901 "recv_buf_size": 4096, 00:04:41.901 "send_buf_size": 4096, 00:04:41.901 "enable_recv_pipe": true, 00:04:41.901 "enable_quickack": false, 00:04:41.901 "enable_placement_id": 0, 00:04:41.901 "enable_zerocopy_send_server": true, 00:04:41.901 "enable_zerocopy_send_client": false, 00:04:41.901 "zerocopy_threshold": 0, 00:04:41.901 "tls_version": 0, 00:04:41.901 "enable_ktls": false 00:04:41.901 } 00:04:41.901 } 00:04:41.901 ] 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "subsystem": "vmd", 00:04:41.901 "config": [] 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "subsystem": "accel", 00:04:41.901 "config": [ 00:04:41.901 { 00:04:41.901 "method": "accel_set_options", 00:04:41.901 "params": { 00:04:41.901 "small_cache_size": 128, 00:04:41.901 "large_cache_size": 16, 00:04:41.901 "task_count": 2048, 00:04:41.901 "sequence_count": 2048, 00:04:41.901 "buf_count": 2048 00:04:41.901 } 00:04:41.901 } 00:04:41.901 ] 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "subsystem": "bdev", 00:04:41.901 "config": [ 00:04:41.901 { 00:04:41.901 "method": "bdev_set_options", 00:04:41.901 "params": { 00:04:41.901 "bdev_io_pool_size": 65535, 00:04:41.901 "bdev_io_cache_size": 256, 00:04:41.901 "bdev_auto_examine": true, 00:04:41.901 "iobuf_small_cache_size": 128, 00:04:41.901 "iobuf_large_cache_size": 16 00:04:41.901 } 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "method": "bdev_raid_set_options", 00:04:41.901 "params": { 00:04:41.901 "process_window_size_kb": 1024 00:04:41.901 } 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "method": "bdev_iscsi_set_options", 00:04:41.901 "params": { 00:04:41.901 "timeout_sec": 30 00:04:41.901 } 00:04:41.901 }, 00:04:41.901 { 00:04:41.901 "method": "bdev_nvme_set_options", 00:04:41.901 "params": { 00:04:41.901 "action_on_timeout": "none", 00:04:41.901 "timeout_us": 0, 00:04:41.901 "timeout_admin_us": 0, 00:04:41.901 "keep_alive_timeout_ms": 10000, 00:04:41.901 "arbitration_burst": 0, 00:04:41.901 "low_priority_weight": 0, 00:04:41.901 "medium_priority_weight": 0, 00:04:41.901 "high_priority_weight": 0, 00:04:41.901 "nvme_adminq_poll_period_us": 10000, 00:04:41.901 "nvme_ioq_poll_period_us": 0, 00:04:41.901 "io_queue_requests": 0, 00:04:41.901 "delay_cmd_submit": true, 00:04:41.901 "transport_retry_count": 4, 00:04:41.901 "bdev_retry_count": 3, 00:04:41.901 "transport_ack_timeout": 0, 00:04:41.901 "ctrlr_loss_timeout_sec": 0, 00:04:41.901 "reconnect_delay_sec": 0, 00:04:41.901 "fast_io_fail_timeout_sec": 0, 00:04:41.901 "disable_auto_failback": false, 00:04:41.901 "generate_uuids": false, 00:04:41.901 "transport_tos": 0, 00:04:41.901 "nvme_error_stat": false, 00:04:41.901 "rdma_srq_size": 0, 00:04:41.901 "io_path_stat": false, 00:04:41.901 "allow_accel_sequence": false, 00:04:41.901 "rdma_max_cq_size": 0, 00:04:41.901 "rdma_cm_event_timeout_ms": 0, 00:04:41.901 "dhchap_digests": [ 00:04:41.901 "sha256", 00:04:41.901 "sha384", 00:04:41.901 "sha512" 00:04:41.901 ], 00:04:41.901 "dhchap_dhgroups": [ 00:04:41.901 "null", 00:04:41.901 "ffdhe2048", 00:04:41.901 "ffdhe3072", 00:04:41.901 "ffdhe4096", 00:04:41.901 
"ffdhe6144", 00:04:41.901 "ffdhe8192" 00:04:41.901 ] 00:04:41.901 } 00:04:41.901 }, 00:04:41.901 { 00:04:41.902 "method": "bdev_nvme_set_hotplug", 00:04:41.902 "params": { 00:04:41.902 "period_us": 100000, 00:04:41.902 "enable": false 00:04:41.902 } 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "method": "bdev_wait_for_examine" 00:04:41.902 } 00:04:41.902 ] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "scsi", 00:04:41.902 "config": null 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "scheduler", 00:04:41.902 "config": [ 00:04:41.902 { 00:04:41.902 "method": "framework_set_scheduler", 00:04:41.902 "params": { 00:04:41.902 "name": "static" 00:04:41.902 } 00:04:41.902 } 00:04:41.902 ] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "vhost_scsi", 00:04:41.902 "config": [] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "vhost_blk", 00:04:41.902 "config": [] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "ublk", 00:04:41.902 "config": [] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "nbd", 00:04:41.902 "config": [] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "nvmf", 00:04:41.902 "config": [ 00:04:41.902 { 00:04:41.902 "method": "nvmf_set_config", 00:04:41.902 "params": { 00:04:41.902 "discovery_filter": "match_any", 00:04:41.902 "admin_cmd_passthru": { 00:04:41.902 "identify_ctrlr": false 00:04:41.902 } 00:04:41.902 } 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "method": "nvmf_set_max_subsystems", 00:04:41.902 "params": { 00:04:41.902 "max_subsystems": 1024 00:04:41.902 } 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "method": "nvmf_set_crdt", 00:04:41.902 "params": { 00:04:41.902 "crdt1": 0, 00:04:41.902 "crdt2": 0, 00:04:41.902 "crdt3": 0 00:04:41.902 } 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "method": "nvmf_create_transport", 00:04:41.902 "params": { 00:04:41.902 "trtype": "TCP", 00:04:41.902 "max_queue_depth": 128, 00:04:41.902 "max_io_qpairs_per_ctrlr": 127, 00:04:41.902 "in_capsule_data_size": 4096, 00:04:41.902 "max_io_size": 131072, 00:04:41.902 "io_unit_size": 131072, 00:04:41.902 "max_aq_depth": 128, 00:04:41.902 "num_shared_buffers": 511, 00:04:41.902 "buf_cache_size": 4294967295, 00:04:41.902 "dif_insert_or_strip": false, 00:04:41.902 "zcopy": false, 00:04:41.902 "c2h_success": true, 00:04:41.902 "sock_priority": 0, 00:04:41.902 "abort_timeout_sec": 1, 00:04:41.902 "ack_timeout": 0, 00:04:41.902 "data_wr_pool_size": 0 00:04:41.902 } 00:04:41.902 } 00:04:41.902 ] 00:04:41.902 }, 00:04:41.902 { 00:04:41.902 "subsystem": "iscsi", 00:04:41.902 "config": [ 00:04:41.902 { 00:04:41.902 "method": "iscsi_set_options", 00:04:41.902 "params": { 00:04:41.902 "node_base": "iqn.2016-06.io.spdk", 00:04:41.902 "max_sessions": 128, 00:04:41.902 "max_connections_per_session": 2, 00:04:41.902 "max_queue_depth": 64, 00:04:41.902 "default_time2wait": 2, 00:04:41.902 "default_time2retain": 20, 00:04:41.902 "first_burst_length": 8192, 00:04:41.902 "immediate_data": true, 00:04:41.902 "allow_duplicated_isid": false, 00:04:41.902 "error_recovery_level": 0, 00:04:41.902 "nop_timeout": 60, 00:04:41.902 "nop_in_interval": 30, 00:04:41.902 "disable_chap": false, 00:04:41.902 "require_chap": false, 00:04:41.902 "mutual_chap": false, 00:04:41.902 "chap_group": 0, 00:04:41.902 "max_large_datain_per_connection": 64, 00:04:41.902 "max_r2t_per_connection": 4, 00:04:41.902 "pdu_pool_size": 36864, 00:04:41.902 "immediate_data_pool_size": 16384, 00:04:41.902 "data_out_pool_size": 2048 00:04:41.902 } 00:04:41.902 } 00:04:41.902 ] 00:04:41.902 } 
00:04:41.902 ] 00:04:41.902 } 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1244797 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1244797 ']' 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1244797 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1244797 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1244797' 00:04:41.902 killing process with pid 1244797 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1244797 00:04:41.902 16:49:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1244797 00:04:42.162 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.162 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1244976 00:04:42.162 16:49:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1244976 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1244976 ']' 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1244976 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1244976 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1244976' 00:04:47.445 killing process with pid 1244976 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1244976 00:04:47.445 16:49:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1244976 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:47.445 00:04:47.445 real 0m6.547s 00:04:47.445 user 0m6.453s 00:04:47.445 sys 0m0.502s 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
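Above, the saved JSON is replayed: a second spdk_tgt is started with --no-rpc-server plus --json test/rpc/config.json, and success is judged by grepping its output for the 'TCP Transport Init' notice, i.e. the transport created before save_config must be re-created purely from the configuration file. A hedged sketch of that verification step, assuming the target's output is captured to the test's log path:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt && echo 'transport restored from JSON config'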
00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.445 ************************************ 00:04:47.445 END TEST skip_rpc_with_json 00:04:47.445 ************************************ 00:04:47.445 16:49:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:47.445 16:49:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.445 16:49:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.445 16:49:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.445 ************************************ 00:04:47.445 START TEST skip_rpc_with_delay 00:04:47.445 ************************************ 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:47.445 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:47.706 [2024-05-15 16:49:26.280442] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
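The error above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc asks the application to pause startup until an explicit RPC arrives, which cannot happen when --no-rpc-server disables the RPC server, so spdk_tgt must refuse to start and the NOT wrapper counts that failure as a pass. A one-line restatement of the assertion:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc \
        || echo 'refused as expected: cannot wait for RPC without an RPC server'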
00:04:47.706 [2024-05-15 16:49:26.280539] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:47.706 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:47.706 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.706 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:47.706 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.706 00:04:47.706 real 0m0.073s 00:04:47.706 user 0m0.039s 00:04:47.706 sys 0m0.033s 00:04:47.706 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.706 16:49:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:47.706 ************************************ 00:04:47.706 END TEST skip_rpc_with_delay 00:04:47.706 ************************************ 00:04:47.706 16:49:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:47.706 16:49:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:47.706 16:49:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:47.706 16:49:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.706 16:49:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.706 16:49:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.706 ************************************ 00:04:47.706 START TEST exit_on_failed_rpc_init 00:04:47.706 ************************************ 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1246256 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1246256 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1246256 ']' 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.706 16:49:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.706 [2024-05-15 16:49:26.446881] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
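exit_on_failed_rpc_init, which begins here, brings up one target on the default RPC socket and then launches a second one (-m 0x2) against the same /var/tmp/spdk.sock; the point of the test is that the second instance must fail RPC initialization and exit non-zero instead of hanging. A rough sketch of the scenario, with sleep standing in for waitforlisten:

    ./build/bin/spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock
    sleep 5
    ./build/bin/spdk_tgt -m 0x2            # second instance: RPC listen fails, app stops
    echo "second instance exited with status $?"   # expected to be non-zero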
00:04:47.706 [2024-05-15 16:49:26.446945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246256 ] 00:04:47.706 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.706 [2024-05-15 16:49:26.511887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.966 [2024-05-15 16:49:26.587014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:48.536 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.536 [2024-05-15 16:49:27.277692] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:04:48.536 [2024-05-15 16:49:27.277742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246369 ] 00:04:48.536 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.536 [2024-05-15 16:49:27.352483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.796 [2024-05-15 16:49:27.416921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.796 [2024-05-15 16:49:27.416982] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
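The exit_on_failed_rpc_init flow comes down to two targets competing for the same RPC socket: the first instance owns /var/tmp/spdk.sock, so the second must fail rpc_initialize and exit non-zero. A condensed sketch of that scenario, assuming the default socket path shown in the log (the real test also installs traps and waits for the first instance to start listening):

    ./build/bin/spdk_tgt -m 0x1 &     # first instance, RPC server on the default /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2       # second instance; expected error: "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    echo "second instance exit code: $?"    # non-zero, as asserted above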
00:04:48.796 [2024-05-15 16:49:27.416992] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.796 [2024-05-15 16:49:27.416998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1246256 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1246256 ']' 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1246256 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1246256 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1246256' 00:04:48.796 killing process with pid 1246256 00:04:48.796 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1246256 00:04:48.797 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1246256 00:04:49.057 00:04:49.057 real 0m1.349s 00:04:49.057 user 0m1.604s 00:04:49.057 sys 0m0.353s 00:04:49.057 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.057 16:49:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.057 ************************************ 00:04:49.057 END TEST exit_on_failed_rpc_init 00:04:49.057 ************************************ 00:04:49.057 16:49:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.057 00:04:49.057 real 0m13.647s 00:04:49.057 user 0m13.292s 00:04:49.057 sys 0m1.393s 00:04:49.057 16:49:27 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.057 16:49:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.057 ************************************ 00:04:49.057 END TEST skip_rpc 00:04:49.057 ************************************ 00:04:49.057 16:49:27 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:49.057 16:49:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.057 16:49:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.057 16:49:27 -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.057 ************************************ 00:04:49.057 START TEST rpc_client 00:04:49.057 ************************************ 00:04:49.057 16:49:27 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:49.317 * Looking for test storage... 00:04:49.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:49.317 16:49:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:49.317 OK 00:04:49.317 16:49:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:49.317 00:04:49.317 real 0m0.132s 00:04:49.317 user 0m0.060s 00:04:49.317 sys 0m0.081s 00:04:49.318 16:49:27 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.318 16:49:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:49.318 ************************************ 00:04:49.318 END TEST rpc_client 00:04:49.318 ************************************ 00:04:49.318 16:49:28 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:49.318 16:49:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.318 16:49:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.318 16:49:28 -- common/autotest_common.sh@10 -- # set +x 00:04:49.318 ************************************ 00:04:49.318 START TEST json_config 00:04:49.318 ************************************ 00:04:49.318 16:49:28 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:49.579 16:49:28 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:49.579 16:49:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:49.580 16:49:28 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.580 16:49:28 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.580 16:49:28 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.580 16:49:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.580 16:49:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.580 16:49:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.580 16:49:28 json_config -- paths/export.sh@5 -- # export PATH 00:04:49.580 16:49:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@47 -- # : 0 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:49.580 16:49:28 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:49.580 INFO: JSON configuration test init 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.580 16:49:28 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:49.580 16:49:28 json_config -- json_config/common.sh@9 -- # local app=target 00:04:49.580 16:49:28 json_config -- json_config/common.sh@10 -- # shift 00:04:49.580 16:49:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.580 16:49:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.580 16:49:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.580 16:49:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.580 16:49:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.580 16:49:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1246767 00:04:49.580 16:49:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.580 Waiting for target to run... 
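For the json_config tests the target is deliberately started on a private RPC socket (-r /var/tmp/spdk_tgt.sock) so it cannot collide with anything on the default socket, and every tgt_rpc call that follows passes that same socket to rpc.py. A short sketch of the pattern, using only flags and RPC methods that appear in this log; the input file name is a placeholder:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # Once the target is listening, drive it through the same socket.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < some_config.json       # placeholder input file
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > current_config.json    # placeholder output file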
00:04:49.580 16:49:28 json_config -- json_config/common.sh@25 -- # waitforlisten 1246767 /var/tmp/spdk_tgt.sock 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@827 -- # '[' -z 1246767 ']' 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.580 16:49:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.580 16:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.580 [2024-05-15 16:49:28.255458] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:04:49.580 [2024-05-15 16:49:28.255528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246767 ] 00:04:49.580 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.841 [2024-05-15 16:49:28.606710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.841 [2024-05-15 16:49:28.658553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.413 16:49:29 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:50.413 16:49:29 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:50.413 16:49:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:50.413 00:04:50.413 16:49:29 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:50.413 16:49:29 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:50.413 16:49:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:50.413 16:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.413 16:49:29 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:50.413 16:49:29 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:50.413 16:49:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.413 16:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.413 16:49:29 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:50.413 16:49:29 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:50.413 16:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:50.984 16:49:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:50.984 16:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.984 16:49:29 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:50.984 16:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:50.984 16:49:29 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:50.984 16:49:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.984 16:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:51.246 16:49:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:51.246 16:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:51.246 16:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:51.246 MallocForNvmf0 00:04:51.246 16:49:29 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:51.246 16:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:51.508 MallocForNvmf1 00:04:51.508 16:49:30 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:51.508 16:49:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:51.508 [2024-05-15 16:49:30.306487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.508 16:49:30 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:51.508 16:49:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:51.769 16:49:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:51.769 16:49:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:52.029 16:49:30 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:52.029 16:49:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:52.029 16:49:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:52.029 16:49:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:52.290 [2024-05-15 16:49:30.932123] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:52.290 [2024-05-15 16:49:30.932752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:52.290 16:49:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:52.290 16:49:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.290 16:49:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.290 16:49:30 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:52.290 16:49:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.290 16:49:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.290 16:49:31 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:52.290 16:49:31 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:52.290 16:49:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:52.551 MallocBdevForConfigChangeCheck 00:04:52.551 16:49:31 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:52.551 16:49:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.551 16:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.551 16:49:31 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:52.551 16:49:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.811 16:49:31 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:04:52.811 INFO: shutting down applications... 00:04:52.811 16:49:31 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:52.811 16:49:31 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:52.811 16:49:31 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:52.811 16:49:31 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:53.072 Calling clear_iscsi_subsystem 00:04:53.072 Calling clear_nvmf_subsystem 00:04:53.072 Calling clear_nbd_subsystem 00:04:53.072 Calling clear_ublk_subsystem 00:04:53.072 Calling clear_vhost_blk_subsystem 00:04:53.072 Calling clear_vhost_scsi_subsystem 00:04:53.072 Calling clear_bdev_subsystem 00:04:53.333 16:49:31 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:53.333 16:49:31 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:53.333 16:49:31 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:53.333 16:49:31 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.333 16:49:31 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:53.333 16:49:31 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:53.594 16:49:32 json_config -- json_config/json_config.sh@345 -- # break 00:04:53.594 16:49:32 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:53.594 16:49:32 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:53.594 16:49:32 json_config -- json_config/common.sh@31 -- # local app=target 00:04:53.594 16:49:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.594 16:49:32 json_config -- json_config/common.sh@35 -- # [[ -n 1246767 ]] 00:04:53.594 16:49:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1246767 00:04:53.594 [2024-05-15 16:49:32.245383] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:53.594 16:49:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.594 16:49:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.594 16:49:32 json_config -- json_config/common.sh@41 -- # kill -0 1246767 00:04:53.594 16:49:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.166 16:49:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.166 16:49:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.166 16:49:32 json_config -- json_config/common.sh@41 -- # kill -0 1246767 00:04:54.166 16:49:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:54.166 16:49:32 json_config -- json_config/common.sh@43 -- # break 00:04:54.166 16:49:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:54.166 16:49:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:54.166 SPDK target shutdown done 00:04:54.166 16:49:32 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:54.166 INFO: relaunching applications... 00:04:54.166 16:49:32 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.166 16:49:32 json_config -- json_config/common.sh@9 -- # local app=target 00:04:54.166 16:49:32 json_config -- json_config/common.sh@10 -- # shift 00:04:54.166 16:49:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:54.166 16:49:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:54.166 16:49:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:54.166 16:49:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.166 16:49:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.166 16:49:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1247644 00:04:54.166 16:49:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:54.166 Waiting for target to run... 00:04:54.166 16:49:32 json_config -- json_config/common.sh@25 -- # waitforlisten 1247644 /var/tmp/spdk_tgt.sock 00:04:54.166 16:49:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.166 16:49:32 json_config -- common/autotest_common.sh@827 -- # '[' -z 1247644 ']' 00:04:54.166 16:49:32 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.166 16:49:32 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.166 16:49:32 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.166 16:49:32 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.166 16:49:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.166 [2024-05-15 16:49:32.805753] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
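Before this relaunch, the first target instance was configured entirely over RPC; the relaunch then replays the saved spdk_tgt_config.json through --json. Collected in one place, the configuration calls seen earlier in this log were (written here with a small helper and repo-relative paths, otherwise verbatim from the tgt_rpc lines above):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }    # same socket tgt_rpc uses above
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0            # 8 MB malloc bdev, 512-byte blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1           # 4 MB malloc bdev, 1024-byte blocks
    rpc nvmf_create_transport -t tcp -u 8192 -c 0                 # TCP transport, flags as logged above
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420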
00:04:54.166 [2024-05-15 16:49:32.805823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247644 ] 00:04:54.166 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.425 [2024-05-15 16:49:33.051125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.425 [2024-05-15 16:49:33.103058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.996 [2024-05-15 16:49:33.592687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.997 [2024-05-15 16:49:33.624667] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:54.997 [2024-05-15 16:49:33.625052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:54.997 16:49:33 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:54.997 16:49:33 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:54.997 16:49:33 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.997 00:04:54.997 16:49:33 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:54.997 16:49:33 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:54.997 INFO: Checking if target configuration is the same... 00:04:54.997 16:49:33 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.997 16:49:33 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:54.997 16:49:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.997 + '[' 2 -ne 2 ']' 00:04:54.997 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:54.997 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:54.997 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:54.997 +++ basename /dev/fd/62 00:04:54.997 ++ mktemp /tmp/62.XXX 00:04:54.997 + tmp_file_1=/tmp/62.rD6 00:04:54.997 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.997 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:54.997 + tmp_file_2=/tmp/spdk_tgt_config.json.2tZ 00:04:54.997 + ret=0 00:04:54.997 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.257 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.258 + diff -u /tmp/62.rD6 /tmp/spdk_tgt_config.json.2tZ 00:04:55.258 + echo 'INFO: JSON config files are the same' 00:04:55.258 INFO: JSON config files are the same 00:04:55.258 + rm /tmp/62.rD6 /tmp/spdk_tgt_config.json.2tZ 00:04:55.258 + exit 0 00:04:55.258 16:49:34 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:55.258 16:49:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.258 INFO: changing configuration and checking if this can be detected... 
00:04:55.258 16:49:34 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.258 16:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.518 16:49:34 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.518 16:49:34 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:55.518 16:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.518 + '[' 2 -ne 2 ']' 00:04:55.518 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.518 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:55.518 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.518 +++ basename /dev/fd/62 00:04:55.518 ++ mktemp /tmp/62.XXX 00:04:55.518 + tmp_file_1=/tmp/62.tnL 00:04:55.518 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.518 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.518 + tmp_file_2=/tmp/spdk_tgt_config.json.wIA 00:04:55.518 + ret=0 00:04:55.518 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.779 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.779 + diff -u /tmp/62.tnL /tmp/spdk_tgt_config.json.wIA 00:04:55.779 + ret=1 00:04:55.779 + echo '=== Start of file: /tmp/62.tnL ===' 00:04:55.779 + cat /tmp/62.tnL 00:04:55.779 + echo '=== End of file: /tmp/62.tnL ===' 00:04:55.779 + echo '' 00:04:55.779 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wIA ===' 00:04:55.779 + cat /tmp/spdk_tgt_config.json.wIA 00:04:55.779 + echo '=== End of file: /tmp/spdk_tgt_config.json.wIA ===' 00:04:55.779 + echo '' 00:04:55.779 + rm /tmp/62.tnL /tmp/spdk_tgt_config.json.wIA 00:04:55.779 + exit 1 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:55.779 INFO: configuration change detected. 
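The "JSON config files are the same" and "configuration change detected" verdicts above come from dumping the live configuration with save_config, normalizing both JSON documents with config_filter.py -method sort, and diffing the result against spdk_tgt_config.json; deleting MallocBdevForConfigChangeCheck is simply a cheap way to make the live config differ. A sketch of the comparison, assuming config_filter.py reads JSON on stdin as json_diff.sh uses it here, with scratch file names chosen only for readability:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > live.json                       # scratch name
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > expected.json  # scratch name
    diff -u expected.json live.json    # exit 0 -> configs identical; non-zero -> change detected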
00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@317 -- # [[ -n 1247644 ]] 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 16:49:34 json_config -- json_config/json_config.sh@323 -- # killprocess 1247644 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@946 -- # '[' -z 1247644 ']' 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@950 -- # kill -0 1247644 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@951 -- # uname 00:04:55.779 16:49:34 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:56.040 16:49:34 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1247644 00:04:56.040 16:49:34 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:56.040 16:49:34 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:56.040 16:49:34 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1247644' 00:04:56.040 killing process with pid 1247644 00:04:56.040 16:49:34 json_config -- common/autotest_common.sh@965 -- # kill 1247644 00:04:56.040 [2024-05-15 16:49:34.660649] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:56.040 16:49:34 json_config -- common/autotest_common.sh@970 -- # wait 1247644 00:04:56.302 16:49:34 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.302 16:49:34 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:56.302 16:49:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.302 16:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.302 16:49:34 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:56.302 16:49:34 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:56.302 INFO: Success 00:04:56.302 00:04:56.302 real 0m6.909s 00:04:56.302 user 0m8.364s 00:04:56.302 sys 0m1.742s 00:04:56.302 16:49:34 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.302 16:49:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.302 ************************************ 00:04:56.302 END TEST json_config 00:04:56.302 ************************************ 00:04:56.302 16:49:35 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:56.302 16:49:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.302 16:49:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.302 16:49:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.302 ************************************ 00:04:56.302 START TEST json_config_extra_key 00:04:56.302 ************************************ 00:04:56.302 16:49:35 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:56.302 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.302 16:49:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.564 16:49:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.564 16:49:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.564 16:49:35 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.564 16:49:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.564 16:49:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.564 16:49:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.564 16:49:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:56.564 16:49:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.564 16:49:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.564 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.564 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:56.564 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:56.564 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:56.564 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:56.564 16:49:35 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:56.565 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:56.565 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:56.565 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:56.565 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.565 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:56.565 INFO: launching applications... 00:04:56.565 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1248395 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.565 Waiting for target to run... 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1248395 /var/tmp/spdk_tgt.sock 00:04:56.565 16:49:35 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1248395 ']' 00:04:56.565 16:49:35 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.565 16:49:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:56.565 16:49:35 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.565 16:49:35 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.565 16:49:35 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.565 16:49:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.565 [2024-05-15 16:49:35.217287] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
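In contrast to the json_config run, this extra_key target gets its whole configuration preloaded from a JSON file at startup instead of being built over RPC; without --wait-for-rpc it initializes immediately. The launch boils down to the flags shown above (path written repo-relative here):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./test/json_config/extra_key.json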
00:04:56.565 [2024-05-15 16:49:35.217347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248395 ] 00:04:56.565 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.825 [2024-05-15 16:49:35.476914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.825 [2024-05-15 16:49:35.529972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.471 16:49:35 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.471 16:49:35 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:57.471 00:04:57.471 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:57.471 INFO: shutting down applications... 00:04:57.471 16:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1248395 ]] 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1248395 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1248395 00:04:57.471 16:49:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1248395 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.731 16:49:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.731 SPDK target shutdown done 00:04:57.731 16:49:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.731 Success 00:04:57.731 00:04:57.731 real 0m1.402s 00:04:57.731 user 0m1.053s 00:04:57.731 sys 0m0.344s 00:04:57.731 16:49:36 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.731 16:49:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.731 ************************************ 00:04:57.731 END TEST json_config_extra_key 00:04:57.731 ************************************ 00:04:57.731 16:49:36 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.731 16:49:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.731 16:49:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.731 16:49:36 -- common/autotest_common.sh@10 -- # set +x 00:04:57.731 ************************************ 
00:04:57.731 START TEST alias_rpc 00:04:57.731 ************************************ 00:04:57.731 16:49:36 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.991 * Looking for test storage... 00:04:57.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:57.991 16:49:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.992 16:49:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1248753 00:04:57.992 16:49:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1248753 00:04:57.992 16:49:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.992 16:49:36 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1248753 ']' 00:04:57.992 16:49:36 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.992 16:49:36 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.992 16:49:36 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.992 16:49:36 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.992 16:49:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.992 [2024-05-15 16:49:36.708633] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:04:57.992 [2024-05-15 16:49:36.708695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248753 ] 00:04:57.992 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.992 [2024-05-15 16:49:36.767847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.252 [2024-05-15 16:49:36.832378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.822 16:49:37 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:58.822 16:49:37 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:58.822 16:49:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:59.082 16:49:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1248753 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1248753 ']' 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1248753 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1248753 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1248753' 00:04:59.082 killing process with pid 1248753 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@965 -- # kill 1248753 00:04:59.082 16:49:37 alias_rpc -- common/autotest_common.sh@970 -- # wait 1248753 
00:04:59.343 00:04:59.343 real 0m1.376s 00:04:59.343 user 0m1.528s 00:04:59.343 sys 0m0.368s 00:04:59.343 16:49:37 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.343 16:49:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.343 ************************************ 00:04:59.343 END TEST alias_rpc 00:04:59.343 ************************************ 00:04:59.343 16:49:37 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:59.343 16:49:37 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.343 16:49:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.343 16:49:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.343 16:49:37 -- common/autotest_common.sh@10 -- # set +x 00:04:59.343 ************************************ 00:04:59.343 START TEST spdkcli_tcp 00:04:59.343 ************************************ 00:04:59.343 16:49:38 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.343 * Looking for test storage... 00:04:59.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:59.343 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1249021 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1249021 00:04:59.344 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1249021 ']' 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.344 16:49:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.344 [2024-05-15 16:49:38.170619] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:04:59.344 [2024-05-15 16:49:38.170688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249021 ] 00:04:59.605 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.605 [2024-05-15 16:49:38.237151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.605 [2024-05-15 16:49:38.314430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.605 [2024-05-15 16:49:38.314432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.177 16:49:38 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.177 16:49:38 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:00.177 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1249187 00:05:00.177 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:00.177 16:49:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.439 [ 00:05:00.439 "bdev_malloc_delete", 00:05:00.439 "bdev_malloc_create", 00:05:00.439 "bdev_null_resize", 00:05:00.439 "bdev_null_delete", 00:05:00.439 "bdev_null_create", 00:05:00.439 "bdev_nvme_cuse_unregister", 00:05:00.439 "bdev_nvme_cuse_register", 00:05:00.439 "bdev_opal_new_user", 00:05:00.439 "bdev_opal_set_lock_state", 00:05:00.439 "bdev_opal_delete", 00:05:00.439 "bdev_opal_get_info", 00:05:00.439 "bdev_opal_create", 00:05:00.439 "bdev_nvme_opal_revert", 00:05:00.439 "bdev_nvme_opal_init", 00:05:00.439 "bdev_nvme_send_cmd", 00:05:00.439 "bdev_nvme_get_path_iostat", 00:05:00.439 "bdev_nvme_get_mdns_discovery_info", 00:05:00.439 "bdev_nvme_stop_mdns_discovery", 00:05:00.439 "bdev_nvme_start_mdns_discovery", 00:05:00.439 "bdev_nvme_set_multipath_policy", 00:05:00.439 "bdev_nvme_set_preferred_path", 00:05:00.439 "bdev_nvme_get_io_paths", 00:05:00.439 "bdev_nvme_remove_error_injection", 00:05:00.439 "bdev_nvme_add_error_injection", 00:05:00.439 "bdev_nvme_get_discovery_info", 00:05:00.439 "bdev_nvme_stop_discovery", 00:05:00.439 "bdev_nvme_start_discovery", 00:05:00.439 "bdev_nvme_get_controller_health_info", 00:05:00.439 "bdev_nvme_disable_controller", 00:05:00.439 "bdev_nvme_enable_controller", 00:05:00.439 "bdev_nvme_reset_controller", 00:05:00.439 "bdev_nvme_get_transport_statistics", 00:05:00.439 "bdev_nvme_apply_firmware", 00:05:00.439 "bdev_nvme_detach_controller", 00:05:00.439 "bdev_nvme_get_controllers", 00:05:00.439 "bdev_nvme_attach_controller", 00:05:00.439 "bdev_nvme_set_hotplug", 00:05:00.439 "bdev_nvme_set_options", 00:05:00.439 "bdev_passthru_delete", 00:05:00.439 "bdev_passthru_create", 00:05:00.439 "bdev_lvol_check_shallow_copy", 00:05:00.439 "bdev_lvol_start_shallow_copy", 00:05:00.439 "bdev_lvol_grow_lvstore", 00:05:00.439 "bdev_lvol_get_lvols", 00:05:00.439 "bdev_lvol_get_lvstores", 00:05:00.439 "bdev_lvol_delete", 00:05:00.439 "bdev_lvol_set_read_only", 00:05:00.439 "bdev_lvol_resize", 00:05:00.439 "bdev_lvol_decouple_parent", 00:05:00.439 "bdev_lvol_inflate", 00:05:00.439 "bdev_lvol_rename", 00:05:00.439 "bdev_lvol_clone_bdev", 00:05:00.439 "bdev_lvol_clone", 00:05:00.439 "bdev_lvol_snapshot", 00:05:00.439 "bdev_lvol_create", 00:05:00.439 "bdev_lvol_delete_lvstore", 00:05:00.439 "bdev_lvol_rename_lvstore", 00:05:00.439 "bdev_lvol_create_lvstore", 00:05:00.439 "bdev_raid_set_options", 
00:05:00.439 "bdev_raid_remove_base_bdev", 00:05:00.439 "bdev_raid_add_base_bdev", 00:05:00.439 "bdev_raid_delete", 00:05:00.439 "bdev_raid_create", 00:05:00.439 "bdev_raid_get_bdevs", 00:05:00.439 "bdev_error_inject_error", 00:05:00.439 "bdev_error_delete", 00:05:00.439 "bdev_error_create", 00:05:00.439 "bdev_split_delete", 00:05:00.439 "bdev_split_create", 00:05:00.439 "bdev_delay_delete", 00:05:00.439 "bdev_delay_create", 00:05:00.439 "bdev_delay_update_latency", 00:05:00.439 "bdev_zone_block_delete", 00:05:00.439 "bdev_zone_block_create", 00:05:00.439 "blobfs_create", 00:05:00.439 "blobfs_detect", 00:05:00.439 "blobfs_set_cache_size", 00:05:00.439 "bdev_aio_delete", 00:05:00.439 "bdev_aio_rescan", 00:05:00.439 "bdev_aio_create", 00:05:00.439 "bdev_ftl_set_property", 00:05:00.439 "bdev_ftl_get_properties", 00:05:00.439 "bdev_ftl_get_stats", 00:05:00.439 "bdev_ftl_unmap", 00:05:00.439 "bdev_ftl_unload", 00:05:00.439 "bdev_ftl_delete", 00:05:00.439 "bdev_ftl_load", 00:05:00.439 "bdev_ftl_create", 00:05:00.439 "bdev_virtio_attach_controller", 00:05:00.439 "bdev_virtio_scsi_get_devices", 00:05:00.439 "bdev_virtio_detach_controller", 00:05:00.439 "bdev_virtio_blk_set_hotplug", 00:05:00.439 "bdev_iscsi_delete", 00:05:00.439 "bdev_iscsi_create", 00:05:00.439 "bdev_iscsi_set_options", 00:05:00.439 "accel_error_inject_error", 00:05:00.439 "ioat_scan_accel_module", 00:05:00.439 "dsa_scan_accel_module", 00:05:00.439 "iaa_scan_accel_module", 00:05:00.439 "vfu_virtio_create_scsi_endpoint", 00:05:00.439 "vfu_virtio_scsi_remove_target", 00:05:00.439 "vfu_virtio_scsi_add_target", 00:05:00.439 "vfu_virtio_create_blk_endpoint", 00:05:00.439 "vfu_virtio_delete_endpoint", 00:05:00.439 "keyring_file_remove_key", 00:05:00.439 "keyring_file_add_key", 00:05:00.439 "iscsi_get_histogram", 00:05:00.439 "iscsi_enable_histogram", 00:05:00.439 "iscsi_set_options", 00:05:00.439 "iscsi_get_auth_groups", 00:05:00.439 "iscsi_auth_group_remove_secret", 00:05:00.439 "iscsi_auth_group_add_secret", 00:05:00.439 "iscsi_delete_auth_group", 00:05:00.439 "iscsi_create_auth_group", 00:05:00.439 "iscsi_set_discovery_auth", 00:05:00.439 "iscsi_get_options", 00:05:00.439 "iscsi_target_node_request_logout", 00:05:00.439 "iscsi_target_node_set_redirect", 00:05:00.439 "iscsi_target_node_set_auth", 00:05:00.439 "iscsi_target_node_add_lun", 00:05:00.439 "iscsi_get_stats", 00:05:00.439 "iscsi_get_connections", 00:05:00.439 "iscsi_portal_group_set_auth", 00:05:00.439 "iscsi_start_portal_group", 00:05:00.439 "iscsi_delete_portal_group", 00:05:00.439 "iscsi_create_portal_group", 00:05:00.439 "iscsi_get_portal_groups", 00:05:00.439 "iscsi_delete_target_node", 00:05:00.439 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.439 "iscsi_target_node_add_pg_ig_maps", 00:05:00.439 "iscsi_create_target_node", 00:05:00.439 "iscsi_get_target_nodes", 00:05:00.439 "iscsi_delete_initiator_group", 00:05:00.439 "iscsi_initiator_group_remove_initiators", 00:05:00.439 "iscsi_initiator_group_add_initiators", 00:05:00.439 "iscsi_create_initiator_group", 00:05:00.439 "iscsi_get_initiator_groups", 00:05:00.439 "nvmf_set_crdt", 00:05:00.439 "nvmf_set_config", 00:05:00.439 "nvmf_set_max_subsystems", 00:05:00.439 "nvmf_stop_mdns_prr", 00:05:00.439 "nvmf_publish_mdns_prr", 00:05:00.439 "nvmf_subsystem_get_listeners", 00:05:00.439 "nvmf_subsystem_get_qpairs", 00:05:00.439 "nvmf_subsystem_get_controllers", 00:05:00.439 "nvmf_get_stats", 00:05:00.439 "nvmf_get_transports", 00:05:00.439 "nvmf_create_transport", 00:05:00.439 "nvmf_get_targets", 00:05:00.439 
"nvmf_delete_target", 00:05:00.439 "nvmf_create_target", 00:05:00.439 "nvmf_subsystem_allow_any_host", 00:05:00.439 "nvmf_subsystem_remove_host", 00:05:00.439 "nvmf_subsystem_add_host", 00:05:00.439 "nvmf_ns_remove_host", 00:05:00.439 "nvmf_ns_add_host", 00:05:00.439 "nvmf_subsystem_remove_ns", 00:05:00.439 "nvmf_subsystem_add_ns", 00:05:00.439 "nvmf_subsystem_listener_set_ana_state", 00:05:00.439 "nvmf_discovery_get_referrals", 00:05:00.439 "nvmf_discovery_remove_referral", 00:05:00.439 "nvmf_discovery_add_referral", 00:05:00.439 "nvmf_subsystem_remove_listener", 00:05:00.439 "nvmf_subsystem_add_listener", 00:05:00.439 "nvmf_delete_subsystem", 00:05:00.439 "nvmf_create_subsystem", 00:05:00.439 "nvmf_get_subsystems", 00:05:00.439 "env_dpdk_get_mem_stats", 00:05:00.439 "nbd_get_disks", 00:05:00.439 "nbd_stop_disk", 00:05:00.439 "nbd_start_disk", 00:05:00.439 "ublk_recover_disk", 00:05:00.439 "ublk_get_disks", 00:05:00.439 "ublk_stop_disk", 00:05:00.439 "ublk_start_disk", 00:05:00.439 "ublk_destroy_target", 00:05:00.439 "ublk_create_target", 00:05:00.439 "virtio_blk_create_transport", 00:05:00.439 "virtio_blk_get_transports", 00:05:00.439 "vhost_controller_set_coalescing", 00:05:00.439 "vhost_get_controllers", 00:05:00.439 "vhost_delete_controller", 00:05:00.439 "vhost_create_blk_controller", 00:05:00.439 "vhost_scsi_controller_remove_target", 00:05:00.439 "vhost_scsi_controller_add_target", 00:05:00.439 "vhost_start_scsi_controller", 00:05:00.439 "vhost_create_scsi_controller", 00:05:00.439 "thread_set_cpumask", 00:05:00.439 "framework_get_scheduler", 00:05:00.439 "framework_set_scheduler", 00:05:00.439 "framework_get_reactors", 00:05:00.439 "thread_get_io_channels", 00:05:00.439 "thread_get_pollers", 00:05:00.439 "thread_get_stats", 00:05:00.439 "framework_monitor_context_switch", 00:05:00.439 "spdk_kill_instance", 00:05:00.439 "log_enable_timestamps", 00:05:00.439 "log_get_flags", 00:05:00.439 "log_clear_flag", 00:05:00.439 "log_set_flag", 00:05:00.439 "log_get_level", 00:05:00.439 "log_set_level", 00:05:00.439 "log_get_print_level", 00:05:00.439 "log_set_print_level", 00:05:00.439 "framework_enable_cpumask_locks", 00:05:00.439 "framework_disable_cpumask_locks", 00:05:00.439 "framework_wait_init", 00:05:00.439 "framework_start_init", 00:05:00.439 "scsi_get_devices", 00:05:00.439 "bdev_get_histogram", 00:05:00.439 "bdev_enable_histogram", 00:05:00.439 "bdev_set_qos_limit", 00:05:00.439 "bdev_set_qd_sampling_period", 00:05:00.439 "bdev_get_bdevs", 00:05:00.439 "bdev_reset_iostat", 00:05:00.439 "bdev_get_iostat", 00:05:00.439 "bdev_examine", 00:05:00.439 "bdev_wait_for_examine", 00:05:00.439 "bdev_set_options", 00:05:00.439 "notify_get_notifications", 00:05:00.439 "notify_get_types", 00:05:00.439 "accel_get_stats", 00:05:00.439 "accel_set_options", 00:05:00.439 "accel_set_driver", 00:05:00.439 "accel_crypto_key_destroy", 00:05:00.439 "accel_crypto_keys_get", 00:05:00.439 "accel_crypto_key_create", 00:05:00.439 "accel_assign_opc", 00:05:00.439 "accel_get_module_info", 00:05:00.439 "accel_get_opc_assignments", 00:05:00.439 "vmd_rescan", 00:05:00.440 "vmd_remove_device", 00:05:00.440 "vmd_enable", 00:05:00.440 "sock_get_default_impl", 00:05:00.440 "sock_set_default_impl", 00:05:00.440 "sock_impl_set_options", 00:05:00.440 "sock_impl_get_options", 00:05:00.440 "iobuf_get_stats", 00:05:00.440 "iobuf_set_options", 00:05:00.440 "keyring_get_keys", 00:05:00.440 "framework_get_pci_devices", 00:05:00.440 "framework_get_config", 00:05:00.440 "framework_get_subsystems", 00:05:00.440 
"vfu_tgt_set_base_path", 00:05:00.440 "trace_get_info", 00:05:00.440 "trace_get_tpoint_group_mask", 00:05:00.440 "trace_disable_tpoint_group", 00:05:00.440 "trace_enable_tpoint_group", 00:05:00.440 "trace_clear_tpoint_mask", 00:05:00.440 "trace_set_tpoint_mask", 00:05:00.440 "spdk_get_version", 00:05:00.440 "rpc_get_methods" 00:05:00.440 ] 00:05:00.440 16:49:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.440 16:49:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.440 16:49:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1249021 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1249021 ']' 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 1249021 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1249021 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1249021' 00:05:00.440 killing process with pid 1249021 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1249021 00:05:00.440 16:49:39 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1249021 00:05:00.701 00:05:00.701 real 0m1.408s 00:05:00.701 user 0m2.581s 00:05:00.701 sys 0m0.431s 00:05:00.701 16:49:39 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.701 16:49:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.701 ************************************ 00:05:00.701 END TEST spdkcli_tcp 00:05:00.701 ************************************ 00:05:00.701 16:49:39 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.701 16:49:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.701 16:49:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.701 16:49:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.701 ************************************ 00:05:00.701 START TEST dpdk_mem_utility 00:05:00.701 ************************************ 00:05:00.701 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.961 * Looking for test storage... 
00:05:00.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:00.961 16:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.962 16:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1249316 00:05:00.962 16:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1249316 00:05:00.962 16:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.962 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1249316 ']' 00:05:00.962 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.962 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.962 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.962 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.962 16:49:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.962 [2024-05-15 16:49:39.649714] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:00.962 [2024-05-15 16:49:39.649776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249316 ] 00:05:00.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.962 [2024-05-15 16:49:39.711319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.962 [2024-05-15 16:49:39.782704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.905 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:01.905 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:01.905 16:49:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.905 16:49:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.905 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.905 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.905 { 00:05:01.905 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.905 } 00:05:01.905 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.905 16:49:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:01.905 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:01.905 1 heaps totaling size 814.000000 MiB 00:05:01.905 size: 814.000000 MiB heap id: 0 00:05:01.905 end heaps---------- 00:05:01.905 8 mempools totaling size 598.116089 MiB 00:05:01.905 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.905 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.905 size: 84.521057 MiB name: bdev_io_1249316 00:05:01.905 size: 51.011292 MiB name: evtpool_1249316 00:05:01.905 size: 50.003479 MiB name: 
msgpool_1249316 00:05:01.905 size: 21.763794 MiB name: PDU_Pool 00:05:01.905 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.905 size: 0.026123 MiB name: Session_Pool 00:05:01.905 end mempools------- 00:05:01.905 6 memzones totaling size 4.142822 MiB 00:05:01.905 size: 1.000366 MiB name: RG_ring_0_1249316 00:05:01.905 size: 1.000366 MiB name: RG_ring_1_1249316 00:05:01.905 size: 1.000366 MiB name: RG_ring_4_1249316 00:05:01.905 size: 1.000366 MiB name: RG_ring_5_1249316 00:05:01.905 size: 0.125366 MiB name: RG_ring_2_1249316 00:05:01.905 size: 0.015991 MiB name: RG_ring_3_1249316 00:05:01.905 end memzones------- 00:05:01.905 16:49:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.905 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:01.905 list of free elements. size: 12.519348 MiB 00:05:01.905 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:01.905 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:01.905 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:01.905 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:01.905 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:01.905 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:01.905 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:01.905 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:01.905 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:01.905 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:01.905 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:01.905 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:01.905 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:01.905 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:01.905 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:01.905 list of standard malloc elements. 
size: 199.218079 MiB 00:05:01.905 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:01.905 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:01.905 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:01.905 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:01.905 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:01.905 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:01.906 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:01.906 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:01.906 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:01.906 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:01.906 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:01.906 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:01.906 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:01.906 list of memzone associated elements. 
size: 602.262573 MiB 00:05:01.906 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:01.906 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.906 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:01.906 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.906 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:01.906 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1249316_0 00:05:01.906 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:01.906 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1249316_0 00:05:01.906 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:01.906 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1249316_0 00:05:01.906 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:01.906 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.906 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:01.906 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.906 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:01.906 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1249316 00:05:01.906 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:01.906 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1249316 00:05:01.906 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:01.906 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1249316 00:05:01.906 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:01.906 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.906 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:01.906 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.906 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:01.906 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.906 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:01.906 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.906 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:01.906 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1249316 00:05:01.906 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:01.906 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1249316 00:05:01.906 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:01.906 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1249316 00:05:01.906 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:01.906 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1249316 00:05:01.906 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:01.906 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1249316 00:05:01.906 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:01.906 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.906 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:01.906 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.906 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:01.906 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.906 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:01.906 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1249316 00:05:01.906 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:01.906 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.906 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:01.906 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.906 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:01.906 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1249316 00:05:01.906 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:01.906 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.906 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:01.906 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1249316 00:05:01.906 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:01.906 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1249316 00:05:01.906 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:01.906 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.906 16:49:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.906 16:49:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1249316 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1249316 ']' 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1249316 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1249316 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1249316' 00:05:01.906 killing process with pid 1249316 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1249316 00:05:01.906 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1249316 00:05:02.167 00:05:02.167 real 0m1.292s 00:05:02.167 user 0m1.367s 00:05:02.167 sys 0m0.372s 00:05:02.167 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.167 16:49:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.167 ************************************ 00:05:02.167 END TEST dpdk_mem_utility 00:05:02.167 ************************************ 00:05:02.167 16:49:40 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:02.167 16:49:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.167 16:49:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.167 16:49:40 -- common/autotest_common.sh@10 -- # set +x 00:05:02.167 ************************************ 00:05:02.167 START TEST event 00:05:02.167 ************************************ 00:05:02.167 16:49:40 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:02.167 * Looking for test storage... 
00:05:02.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:02.167 16:49:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:02.167 16:49:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.167 16:49:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.167 16:49:40 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:02.167 16:49:40 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.167 16:49:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.428 ************************************ 00:05:02.428 START TEST event_perf 00:05:02.428 ************************************ 00:05:02.428 16:49:41 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.428 Running I/O for 1 seconds...[2024-05-15 16:49:41.026108] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:02.428 [2024-05-15 16:49:41.026207] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249648 ] 00:05:02.428 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.428 [2024-05-15 16:49:41.099501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.428 [2024-05-15 16:49:41.169231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.428 [2024-05-15 16:49:41.169348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.428 [2024-05-15 16:49:41.169504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.428 Running I/O for 1 seconds...[2024-05-15 16:49:41.169504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.426 00:05:03.426 lcore 0: 168711 00:05:03.426 lcore 1: 168711 00:05:03.426 lcore 2: 168706 00:05:03.426 lcore 3: 168709 00:05:03.426 done. 00:05:03.426 00:05:03.426 real 0m1.218s 00:05:03.426 user 0m4.129s 00:05:03.426 sys 0m0.086s 00:05:03.426 16:49:42 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.426 16:49:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.426 ************************************ 00:05:03.426 END TEST event_perf 00:05:03.426 ************************************ 00:05:03.426 16:49:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.686 16:49:42 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:03.686 16:49:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.686 16:49:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.686 ************************************ 00:05:03.686 START TEST event_reactor 00:05:03.686 ************************************ 00:05:03.686 16:49:42 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.686 [2024-05-15 16:49:42.296959] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:05:03.686 [2024-05-15 16:49:42.296991] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250001 ] 00:05:03.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.686 [2024-05-15 16:49:42.347023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.686 [2024-05-15 16:49:42.410879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.628 test_start 00:05:04.628 oneshot 00:05:04.628 tick 100 00:05:04.628 tick 100 00:05:04.628 tick 250 00:05:04.628 tick 100 00:05:04.628 tick 100 00:05:04.628 tick 250 00:05:04.628 tick 100 00:05:04.628 tick 500 00:05:04.628 tick 100 00:05:04.628 tick 100 00:05:04.628 tick 250 00:05:04.628 tick 100 00:05:04.628 tick 100 00:05:04.628 test_end 00:05:04.889 00:05:04.889 real 0m1.174s 00:05:04.889 user 0m1.118s 00:05:04.889 sys 0m0.052s 00:05:04.889 16:49:43 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.889 16:49:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.889 ************************************ 00:05:04.889 END TEST event_reactor 00:05:04.889 ************************************ 00:05:04.889 16:49:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.889 16:49:43 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:04.889 16:49:43 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.889 16:49:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.889 ************************************ 00:05:04.889 START TEST event_reactor_perf 00:05:04.889 ************************************ 00:05:04.889 16:49:43 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.889 [2024-05-15 16:49:43.565899] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:05:04.889 [2024-05-15 16:49:43.565997] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250350 ] 00:05:04.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.889 [2024-05-15 16:49:43.627585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.889 [2024-05-15 16:49:43.691085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.275 test_start 00:05:06.275 test_end 00:05:06.275 Performance: 365765 events per second 00:05:06.275 00:05:06.275 real 0m1.198s 00:05:06.275 user 0m1.130s 00:05:06.275 sys 0m0.064s 00:05:06.275 16:49:44 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.275 16:49:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.275 ************************************ 00:05:06.275 END TEST event_reactor_perf 00:05:06.275 ************************************ 00:05:06.275 16:49:44 event -- event/event.sh@49 -- # uname -s 00:05:06.275 16:49:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.275 16:49:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.275 16:49:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.275 16:49:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.275 16:49:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.275 ************************************ 00:05:06.275 START TEST event_scheduler 00:05:06.275 ************************************ 00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.275 * Looking for test storage... 00:05:06.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:06.275 16:49:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.275 16:49:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1250620 00:05:06.275 16:49:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.275 16:49:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.275 16:49:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1250620 00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1250620 ']' 00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:06.275 16:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.275 [2024-05-15 16:49:44.984543] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:06.275 [2024-05-15 16:49:44.984619] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250620 ] 00:05:06.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.275 [2024-05-15 16:49:45.041491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.275 [2024-05-15 16:49:45.109284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.275 [2024-05-15 16:49:45.109432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.275 [2024-05-15 16:49:45.109554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.275 [2024-05-15 16:49:45.109568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:07.215 16:49:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.215 POWER: Env isn't set yet! 00:05:07.215 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:07.215 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.215 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.215 POWER: Attempting to initialise PSTAT power management... 00:05:07.215 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:07.215 POWER: Initialized successfully for lcore 0 power management 00:05:07.215 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:07.215 POWER: Initialized successfully for lcore 1 power management 00:05:07.215 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:07.215 POWER: Initialized successfully for lcore 2 power management 00:05:07.215 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:07.215 POWER: Initialized successfully for lcore 3 power management 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.215 16:49:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.215 16:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.215 [2024-05-15 16:49:45.873038] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
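The framework_set_scheduler / framework_start_init calls traced above are ordinary JSON-RPC requests; both method names also appear in the rpc_get_methods listing earlier in this log. A minimal stand-alone sketch of that setup step, assuming the scheduler test app from this run is still listening on the default /var/tmp/spdk.sock and that the SPDK path below matches this workspace's checkout:

  # Sketch only -- path and socket are assumptions taken from the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC framework_set_scheduler dynamic   # pick the dynamic scheduler before init completes
  $RPC framework_start_init              # finish startup (the app was launched with --wait-for-rpc)
  $RPC framework_get_scheduler           # confirm which scheduler is active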
00:05:07.216 16:49:45 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.216 16:49:45 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.216 16:49:45 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 ************************************ 00:05:07.216 START TEST scheduler_create_thread 00:05:07.216 ************************************ 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 2 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 3 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 4 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 5 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 6 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 7 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.216 8 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.216 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 9 00:05:07.787 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.787 16:49:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.787 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.787 16:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.171 10 00:05:09.171 16:49:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.171 16:49:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:09.171 16:49:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.171 16:49:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.742 16:49:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.742 16:49:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.742 16:49:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.742 16:49:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.742 16:49:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.686 16:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.686 16:49:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.686 16:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.686 16:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.258 16:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.258 16:49:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.258 16:49:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.258 16:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.258 16:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.831 16:49:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.831 00:05:11.831 real 0m4.467s 00:05:11.831 user 0m0.023s 00:05:11.831 sys 0m0.008s 00:05:11.831 16:49:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.831 16:49:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.831 ************************************ 00:05:11.831 END TEST scheduler_create_thread 00:05:11.831 ************************************ 00:05:11.831 16:49:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:11.831 16:49:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1250620 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1250620 ']' 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1250620 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1250620 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1250620' 00:05:11.831 killing process with pid 1250620 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1250620 00:05:11.831 16:49:50 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1250620 00:05:11.831 [2024-05-15 16:49:50.659775] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
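The scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete cycle traced above is driven through rpc.py with the scheduler test app's RPC plugin. A minimal sketch of the same sequence, assuming the app is still listening on /var/tmp/spdk.sock, that scheduler_plugin is importable (the test arranges this itself; the PYTHONPATH below is an assumption), and using an illustrative thread name:

  # Sketch only -- socket, PYTHONPATH and the thread name are assumptions.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export PYTHONPATH=$SPDK/test/event/scheduler
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
  tid=$($RPC scheduler_thread_create -n example_active -m 0x1 -a 100)  # prints the new thread id
  $RPC scheduler_thread_set_active "$tid" 50                           # lower it to 50% active
  $RPC scheduler_thread_delete "$tid"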
00:05:12.094 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:12.094 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:12.094 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:12.094 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:12.094 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:12.094 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:12.094 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:12.094 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:12.094 00:05:12.094 real 0m5.990s 00:05:12.094 user 0m14.356s 00:05:12.094 sys 0m0.350s 00:05:12.094 16:49:50 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.094 16:49:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.094 ************************************ 00:05:12.094 END TEST event_scheduler 00:05:12.094 ************************************ 00:05:12.094 16:49:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.094 16:49:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.094 16:49:50 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.094 16:49:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.094 16:49:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.094 ************************************ 00:05:12.094 START TEST app_repeat 00:05:12.094 ************************************ 00:05:12.094 16:49:50 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1251801 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1251801' 00:05:12.094 Process app_repeat pid: 1251801 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.094 spdk_app_start Round 0 00:05:12.094 16:49:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1251801 /var/tmp/spdk-nbd.sock 00:05:12.094 16:49:50 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1251801 ']' 00:05:12.094 16:49:50 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.094 16:49:50 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.094 16:49:50 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.094 16:49:50 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.094 16:49:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.356 [2024-05-15 16:49:50.945511] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:12.356 [2024-05-15 16:49:50.945583] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251801 ] 00:05:12.356 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.356 [2024-05-15 16:49:51.007885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.356 [2024-05-15 16:49:51.074561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.356 [2024-05-15 16:49:51.074576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.356 16:49:51 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.356 16:49:51 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:12.356 16:49:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.617 Malloc0 00:05:12.617 16:49:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.878 Malloc1 00:05:12.878 16:49:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.878 /dev/nbd0 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.878 1+0 records in 00:05:12.878 1+0 records out 00:05:12.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197293 s, 20.8 MB/s 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:12.878 16:49:51 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.878 16:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.139 /dev/nbd1 00:05:13.139 16:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.139 16:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.139 16:49:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.140 1+0 records in 00:05:13.140 1+0 records out 00:05:13.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000306999 s, 13.3 MB/s 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:13.140 16:49:51 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:13.140 16:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.140 16:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.140 16:49:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.140 16:49:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.140 16:49:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.401 { 00:05:13.401 "nbd_device": "/dev/nbd0", 00:05:13.401 "bdev_name": "Malloc0" 00:05:13.401 }, 00:05:13.401 { 00:05:13.401 "nbd_device": "/dev/nbd1", 00:05:13.401 "bdev_name": "Malloc1" 00:05:13.401 } 00:05:13.401 ]' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.401 { 00:05:13.401 "nbd_device": "/dev/nbd0", 00:05:13.401 "bdev_name": "Malloc0" 00:05:13.401 }, 00:05:13.401 { 00:05:13.401 "nbd_device": "/dev/nbd1", 00:05:13.401 "bdev_name": "Malloc1" 00:05:13.401 } 00:05:13.401 ]' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.401 /dev/nbd1' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.401 /dev/nbd1' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.401 256+0 records in 00:05:13.401 256+0 records out 00:05:13.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124734 s, 84.1 MB/s 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.401 256+0 records in 00:05:13.401 256+0 records out 00:05:13.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0402823 s, 26.0 MB/s 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.401 256+0 records in 00:05:13.401 256+0 records out 00:05:13.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173456 s, 60.5 MB/s 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.401 16:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.662 16:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.922 16:49:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.922 16:49:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.183 16:49:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.451 [2024-05-15 16:49:53.037587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.452 [2024-05-15 16:49:53.101667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.452 [2024-05-15 16:49:53.101817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.452 [2024-05-15 16:49:53.133581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.452 [2024-05-15 16:49:53.133619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
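The Round 0 pass above is the standard nbd round-trip: create malloc bdevs over the nbd socket, export them as /dev/nbd0 and /dev/nbd1, write the same 1 MiB of random data to each, and compare it back before tearing the disks down again. A condensed sketch of that flow for a single device, with the socket path and sizes copied from this run and error handling omitted:

sock=/var/tmp/spdk-nbd.sock
rpc="./scripts/rpc.py -s $sock"
$rpc bdev_malloc_create 64 4096                  # 64 MiB bdev with 4096-byte blocks -> Malloc0
$rpc nbd_start_disk Malloc0 /dev/nbd0
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0               # data read back must match what was written
rm nbdrandtest
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_get_disks                               # expected to report an empty list afterwards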
00:05:17.760 16:49:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.760 16:49:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.760 spdk_app_start Round 1 00:05:17.760 16:49:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1251801 /var/tmp/spdk-nbd.sock 00:05:17.760 16:49:55 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1251801 ']' 00:05:17.760 16:49:55 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.760 16:49:55 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.760 16:49:55 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.760 16:49:55 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.760 16:49:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:17.760 16:49:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.760 Malloc0 00:05:17.760 16:49:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.760 Malloc1 00:05:17.760 16:49:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.760 /dev/nbd0 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.760 1+0 records in 00:05:17.760 1+0 records out 00:05:17.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208869 s, 19.6 MB/s 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:17.760 16:49:56 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.760 16:49:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.022 /dev/nbd1 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.022 1+0 records in 00:05:18.022 1+0 records out 00:05:18.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282983 s, 14.5 MB/s 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:18.022 16:49:56 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:18.022 16:49:56 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.022 16:49:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.283 { 00:05:18.283 "nbd_device": "/dev/nbd0", 00:05:18.283 "bdev_name": "Malloc0" 00:05:18.283 }, 00:05:18.283 { 00:05:18.283 "nbd_device": "/dev/nbd1", 00:05:18.283 "bdev_name": "Malloc1" 00:05:18.283 } 00:05:18.283 ]' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.283 { 00:05:18.283 "nbd_device": "/dev/nbd0", 00:05:18.283 "bdev_name": "Malloc0" 00:05:18.283 }, 00:05:18.283 { 00:05:18.283 "nbd_device": "/dev/nbd1", 00:05:18.283 "bdev_name": "Malloc1" 00:05:18.283 } 00:05:18.283 ]' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.283 /dev/nbd1' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.283 /dev/nbd1' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.283 16:49:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.283 256+0 records in 00:05:18.283 256+0 records out 00:05:18.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124753 s, 84.1 MB/s 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.283 256+0 records in 00:05:18.283 256+0 records out 00:05:18.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0158316 s, 66.2 MB/s 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.283 256+0 records in 00:05:18.283 256+0 records out 00:05:18.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168436 s, 62.3 MB/s 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.283 16:49:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.543 16:49:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.804 16:49:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.804 16:49:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.063 16:49:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.322 [2024-05-15 16:49:57.927594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.322 [2024-05-15 16:49:57.990679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.322 [2024-05-15 16:49:57.990769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.322 [2024-05-15 16:49:58.023112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.322 [2024-05-15 16:49:58.023151] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
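The waitfornbd checks that precede every dd above poll /proc/partitions until the kernel has published the device, then read a single block back to prove the device actually answers I/O; roughly as follows (the retry limit matches the traces, while the back-off delay is an assumption since the retry path never fires in this run):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                # assumed delay between retries
    done
    dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s nbdtest) && rm -f nbdtest
    [ "$size" != 0 ]                             # a non-empty read means the device is usable
}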
00:05:22.622 16:50:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.622 16:50:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.622 spdk_app_start Round 2 00:05:22.622 16:50:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1251801 /var/tmp/spdk-nbd.sock 00:05:22.622 16:50:00 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1251801 ']' 00:05:22.622 16:50:00 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.623 16:50:00 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.623 16:50:00 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.623 16:50:00 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.623 16:50:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.623 16:50:00 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.623 16:50:00 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:22.623 16:50:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.623 Malloc0 00:05:22.623 16:50:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.623 Malloc1 00:05:22.623 16:50:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.623 /dev/nbd0 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.623 1+0 records in 00:05:22.623 1+0 records out 00:05:22.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317855 s, 12.9 MB/s 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:22.623 16:50:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.623 16:50:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.884 /dev/nbd1 00:05:22.884 16:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.884 16:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.884 16:50:01 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:22.884 16:50:01 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.885 1+0 records in 00:05:22.885 1+0 records out 00:05:22.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270284 s, 15.2 MB/s 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:22.885 16:50:01 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:22.885 16:50:01 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:22.885 16:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.885 16:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.885 16:50:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.885 16:50:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.885 16:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.146 { 00:05:23.146 "nbd_device": "/dev/nbd0", 00:05:23.146 "bdev_name": "Malloc0" 00:05:23.146 }, 00:05:23.146 { 00:05:23.146 "nbd_device": "/dev/nbd1", 00:05:23.146 "bdev_name": "Malloc1" 00:05:23.146 } 00:05:23.146 ]' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.146 { 00:05:23.146 "nbd_device": "/dev/nbd0", 00:05:23.146 "bdev_name": "Malloc0" 00:05:23.146 }, 00:05:23.146 { 00:05:23.146 "nbd_device": "/dev/nbd1", 00:05:23.146 "bdev_name": "Malloc1" 00:05:23.146 } 00:05:23.146 ]' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.146 /dev/nbd1' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.146 /dev/nbd1' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.146 256+0 records in 00:05:23.146 256+0 records out 00:05:23.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124864 s, 84.0 MB/s 00:05:23.146 16:50:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.147 256+0 records in 00:05:23.147 256+0 records out 00:05:23.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0227662 s, 46.1 MB/s 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.147 256+0 records in 00:05:23.147 256+0 records out 00:05:23.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176361 s, 59.5 MB/s 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.147 16:50:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.407 16:50:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.673 16:50:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.673 16:50:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.937 16:50:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.197 [2024-05-15 16:50:02.786238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.197 [2024-05-15 16:50:02.849317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.197 [2024-05-15 16:50:02.849319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.197 [2024-05-15 16:50:02.881027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.197 [2024-05-15 16:50:02.881076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:27.498 16:50:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1251801 /var/tmp/spdk-nbd.sock 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1251801 ']' 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:27.498 16:50:05 event.app_repeat -- event/event.sh@39 -- # killprocess 1251801 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1251801 ']' 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1251801 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1251801 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1251801' 00:05:27.498 killing process with pid 1251801 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1251801 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1251801 00:05:27.498 spdk_app_start is called in Round 0. 00:05:27.498 Shutdown signal received, stop current app iteration 00:05:27.498 Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 reinitialization... 00:05:27.498 spdk_app_start is called in Round 1. 00:05:27.498 Shutdown signal received, stop current app iteration 00:05:27.498 Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 reinitialization... 00:05:27.498 spdk_app_start is called in Round 2. 00:05:27.498 Shutdown signal received, stop current app iteration 00:05:27.498 Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 reinitialization... 00:05:27.498 spdk_app_start is called in Round 3. 
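killprocess, used here to stop the app_repeat target (pid 1251801), first checks that the pid is still alive and that it looks like an SPDK reactor rather than a sudo wrapper before signalling it and waiting; approximately as below (the sudo branch is an assumption, since this run only exercises the reactor path):

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                    # nothing to do if the process is already gone
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        kill -9 "$pid"                            # assumed handling for sudo-wrapped processes
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true
}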
00:05:27.498 Shutdown signal received, stop current app iteration 00:05:27.498 16:50:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.498 16:50:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.498 00:05:27.498 real 0m15.069s 00:05:27.498 user 0m32.429s 00:05:27.498 sys 0m2.106s 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.498 16:50:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.498 ************************************ 00:05:27.498 END TEST app_repeat 00:05:27.498 ************************************ 00:05:27.498 16:50:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.498 16:50:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.498 16:50:06 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.498 16:50:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.498 16:50:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.498 ************************************ 00:05:27.498 START TEST cpu_locks 00:05:27.498 ************************************ 00:05:27.498 16:50:06 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.498 * Looking for test storage... 00:05:27.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:27.498 16:50:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.498 16:50:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.498 16:50:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.498 16:50:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.498 16:50:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.498 16:50:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.498 16:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.498 ************************************ 00:05:27.498 START TEST default_locks 00:05:27.498 ************************************ 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1255048 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1255048 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1255048 ']' 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
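The default_locks case that starts here reduces to: pin one spdk_tgt to core 0, wait for its RPC socket, and confirm the process holds its core-lock file for as long as it lives. A rough stand-alone sketch of that check, assuming util-linux lslocks and the /var/tmp/spdk_cpu_lock_* naming that the later check_remaining_locks step also relies on; the suite's own waitforlisten helper polls the RPC socket where this sketch just sleeps:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  # Start a target pinned to core 0 (-m 0x1) and remember its pid.
  $spdk_tgt -m 0x1 &
  pid=$!
  sleep 2   # placeholder for the suite's waitforlisten polling

  # A running reactor should hold a lock on its spdk_cpu_lock file.
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"

  # Once the process is gone the lock must be gone too, which is what the
  # no_locks helper in the trace verifies after the kill.
  kill "$pid"; wait "$pid"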
00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.498 16:50:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.498 [2024-05-15 16:50:06.255686] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:27.498 [2024-05-15 16:50:06.255740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255048 ] 00:05:27.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.498 [2024-05-15 16:50:06.319627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.759 [2024-05-15 16:50:06.393729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.332 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.332 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:28.332 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1255048 00:05:28.332 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1255048 00:05:28.332 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.904 lslocks: write error 00:05:28.904 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1255048 00:05:28.904 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1255048 ']' 00:05:28.904 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1255048 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255048 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255048' 00:05:28.905 killing process with pid 1255048 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1255048 00:05:28.905 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1255048 00:05:29.165 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1255048 00:05:29.165 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:29.165 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1255048 00:05:29.165 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:29.165 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 1255048 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1255048 ']' 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1255048) - No such process 00:05:29.166 ERROR: process (pid: 1255048) is no longer running 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.166 00:05:29.166 real 0m1.643s 00:05:29.166 user 0m1.736s 00:05:29.166 sys 0m0.551s 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.166 16:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.166 ************************************ 00:05:29.166 END TEST default_locks 00:05:29.166 ************************************ 00:05:29.166 16:50:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:29.166 16:50:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.166 16:50:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.166 16:50:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.166 ************************************ 00:05:29.166 START TEST default_locks_via_rpc 00:05:29.166 ************************************ 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1255421 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1255421 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1255421 ']' 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.166 16:50:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.166 [2024-05-15 16:50:07.978865] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:29.166 [2024-05-15 16:50:07.978919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255421 ] 00:05:29.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.427 [2024-05-15 16:50:08.040512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.427 [2024-05-15 16:50:08.113174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1255421 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1255421 00:05:29.997 16:50:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.569 16:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1255421 00:05:30.569 16:50:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1255421 ']' 00:05:30.569 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1255421 00:05:30.569 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:30.569 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.570 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255421 00:05:30.570 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.570 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.570 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255421' 00:05:30.570 killing process with pid 1255421 00:05:30.570 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 1255421 00:05:30.570 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1255421 00:05:30.830 00:05:30.830 real 0m1.535s 00:05:30.830 user 0m1.627s 00:05:30.830 sys 0m0.528s 00:05:30.830 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.830 16:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.830 ************************************ 00:05:30.830 END TEST default_locks_via_rpc 00:05:30.830 ************************************ 00:05:30.830 16:50:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:30.830 16:50:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.830 16:50:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.830 16:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.830 ************************************ 00:05:30.830 START TEST non_locking_app_on_locked_coremask 00:05:30.830 ************************************ 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1255791 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1255791 /var/tmp/spdk.sock 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1255791 ']' 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
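default_locks_via_rpc, which has just finished above, covers the same ground but flips the behaviour at runtime instead of at startup: the target keeps running while the core locks are dropped and re-taken over JSON-RPC. A minimal sketch of that round trip using scripts/rpc.py (the rpc_cmd calls in the trace ultimately drive the same script), with the socket path taken from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock   # default RPC socket of the running target

  # Release the per-core lock files without stopping the app...
  $rpc -s "$sock" framework_disable_cpumask_locks

  # ...then take them again; afterwards lslocks on the target pid should show
  # spdk_cpu_lock entries once more, which is the assertion made above.
  $rpc -s "$sock" framework_enable_cpumask_locks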
00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.830 16:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.830 [2024-05-15 16:50:09.596615] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:30.830 [2024-05-15 16:50:09.596670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255791 ] 00:05:30.830 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.830 [2024-05-15 16:50:09.658828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.091 [2024-05-15 16:50:09.733691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1256046 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1256046 /var/tmp/spdk2.sock 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1256046 ']' 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.661 16:50:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.661 [2024-05-15 16:50:10.412370] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:31.661 [2024-05-15 16:50:10.412423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256046 ] 00:05:31.661 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.920 [2024-05-15 16:50:10.500485] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
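non_locking_app_on_locked_coremask, in progress above, shows why the second instance comes up cleanly even though core 0 is already locked: it opts out of the lock files and uses its own RPC socket. The two launch lines, lifted from the trace with only the backgrounding added:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  # First instance: claims core 0 and creates its spdk_cpu_lock file.
  $spdk_tgt -m 0x1 &

  # Second instance: same core mask, but --disable-cpumask-locks skips the
  # lock files and -r points it at a separate RPC socket, so both coexist.
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &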
00:05:31.920 [2024-05-15 16:50:10.500514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.920 [2024-05-15 16:50:10.629963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.491 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.491 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:32.491 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1255791 00:05:32.491 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1255791 00:05:32.491 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.060 lslocks: write error 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1255791 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1255791 ']' 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1255791 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255791 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255791' 00:05:33.060 killing process with pid 1255791 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1255791 00:05:33.060 16:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1255791 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1256046 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1256046 ']' 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1256046 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1256046 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1256046' 00:05:33.631 
killing process with pid 1256046 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1256046 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1256046 00:05:33.631 00:05:33.631 real 0m2.903s 00:05:33.631 user 0m3.171s 00:05:33.631 sys 0m0.864s 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.631 16:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.631 ************************************ 00:05:33.631 END TEST non_locking_app_on_locked_coremask 00:05:33.631 ************************************ 00:05:33.892 16:50:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.892 16:50:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.892 16:50:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.892 16:50:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.892 ************************************ 00:05:33.892 START TEST locking_app_on_unlocked_coremask 00:05:33.892 ************************************ 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1256495 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1256495 /var/tmp/spdk.sock 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1256495 ']' 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.892 16:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.892 [2024-05-15 16:50:12.575209] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:33.892 [2024-05-15 16:50:12.575257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256495 ] 00:05:33.892 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.892 [2024-05-15 16:50:12.633371] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
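Every case in this suite tears its targets down through the killprocess helper whose xtrace keeps reappearing above: confirm the pid is alive, confirm it is a reactor rather than sudo, SIGTERM it and wait for it. A condensed sketch of that sequence; the real helper in autotest_common.sh carries more branches (the uname check, special handling of sudo-owned processes) than shown here:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                  # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")     # reactor_0 in these tests
      [ "$name" = sudo ] && return 1              # real helper treats sudo specially; skipped here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap it and observe lock cleanup
  }

  # usage: killprocess "$spdk_tgt_pid"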
00:05:33.892 [2024-05-15 16:50:12.633398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.892 [2024-05-15 16:50:12.698598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1256534 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1256534 /var/tmp/spdk2.sock 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1256534 ']' 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.836 16:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.837 [2024-05-15 16:50:13.384368] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:05:34.837 [2024-05-15 16:50:13.384421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256534 ] 00:05:34.837 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.837 [2024-05-15 16:50:13.473696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.837 [2024-05-15 16:50:13.599348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.408 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.408 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:35.408 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1256534 00:05:35.408 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.408 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1256534 00:05:35.979 lslocks: write error 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1256495 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1256495 ']' 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1256495 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1256495 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1256495' 00:05:35.979 killing process with pid 1256495 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1256495 00:05:35.979 16:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1256495 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1256534 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1256534 ']' 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1256534 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1256534 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1256534' 00:05:36.551 killing process with pid 1256534 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1256534 00:05:36.551 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1256534 00:05:36.812 00:05:36.812 real 0m2.878s 00:05:36.812 user 0m3.125s 00:05:36.812 sys 0m0.865s 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.812 ************************************ 00:05:36.812 END TEST locking_app_on_unlocked_coremask 00:05:36.812 ************************************ 00:05:36.812 16:50:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:36.812 16:50:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.812 16:50:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.812 16:50:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.812 ************************************ 00:05:36.812 START TEST locking_app_on_locked_coremask 00:05:36.812 ************************************ 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1257152 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1257152 /var/tmp/spdk.sock 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1257152 ']' 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.812 16:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.812 [2024-05-15 16:50:15.533378] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:05:36.812 [2024-05-15 16:50:15.533432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257152 ] 00:05:36.812 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.812 [2024-05-15 16:50:15.598108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.073 [2024-05-15 16:50:15.672888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1257214 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1257214 /var/tmp/spdk2.sock 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1257214 /var/tmp/spdk2.sock 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1257214 /var/tmp/spdk2.sock 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1257214 ']' 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.645 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.645 [2024-05-15 16:50:16.369166] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:05:37.645 [2024-05-15 16:50:16.369269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257214 ] 00:05:37.645 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.645 [2024-05-15 16:50:16.461828] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1257152 has claimed it. 00:05:37.645 [2024-05-15 16:50:16.461870] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1257214) - No such process 00:05:38.215 ERROR: process (pid: 1257214) is no longer running 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1257152 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1257152 00:05:38.215 16:50:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.787 lslocks: write error 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1257152 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1257152 ']' 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1257152 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1257152 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1257152' 00:05:38.787 killing process with pid 1257152 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1257152 00:05:38.787 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1257152 00:05:39.049 00:05:39.049 real 0m2.269s 00:05:39.049 user 0m2.520s 00:05:39.049 sys 0m0.627s 00:05:39.049 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.049 16:50:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.049 ************************************ 00:05:39.049 END TEST locking_app_on_locked_coremask 00:05:39.049 ************************************ 00:05:39.049 16:50:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:39.049 16:50:17 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.049 16:50:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.049 16:50:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.049 ************************************ 00:05:39.049 START TEST locking_overlapped_coremask 00:05:39.049 ************************************ 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1257580 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1257580 /var/tmp/spdk.sock 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1257580 ']' 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.049 16:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.049 [2024-05-15 16:50:17.876677] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
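locking_overlapped_coremask, which starts here, switches from identical single cores to overlapping masks: the first target takes -m 0x7 (cores 0-2) and a second launch will be attempted with -m 0x1c (cores 2-4). Which core the collision lands on is plain bit arithmetic:

  # Bitwise AND of the two masks shows the contested cores.
  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2 -> core 2

That single shared core is the one named in the later "Cannot create lock on core 2" message.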
00:05:39.049 [2024-05-15 16:50:17.876724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257580 ] 00:05:39.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.310 [2024-05-15 16:50:17.936936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.310 [2024-05-15 16:50:18.002636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.310 [2024-05-15 16:50:18.002848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.310 [2024-05-15 16:50:18.002852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1257765 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1257765 /var/tmp/spdk2.sock 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1257765 /var/tmp/spdk2.sock 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1257765 /var/tmp/spdk2.sock 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1257765 ']' 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.882 16:50:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.882 [2024-05-15 16:50:18.699982] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:05:39.882 [2024-05-15 16:50:18.700035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257765 ] 00:05:40.143 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.143 [2024-05-15 16:50:18.772171] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1257580 has claimed it. 00:05:40.143 [2024-05-15 16:50:18.772202] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1257765) - No such process 00:05:40.715 ERROR: process (pid: 1257765) is no longer running 00:05:40.715 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:40.715 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:40.715 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1257580 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1257580 ']' 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1257580 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1257580 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1257580' 00:05:40.716 killing process with pid 1257580 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
1257580 00:05:40.716 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1257580 00:05:40.978 00:05:40.978 real 0m1.749s 00:05:40.978 user 0m4.975s 00:05:40.978 sys 0m0.359s 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.978 ************************************ 00:05:40.978 END TEST locking_overlapped_coremask 00:05:40.978 ************************************ 00:05:40.978 16:50:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:40.978 16:50:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.978 16:50:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.978 16:50:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.978 ************************************ 00:05:40.978 START TEST locking_overlapped_coremask_via_rpc 00:05:40.978 ************************************ 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1257954 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1257954 /var/tmp/spdk.sock 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1257954 ']' 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.978 16:50:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.978 [2024-05-15 16:50:19.701096] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:40.978 [2024-05-15 16:50:19.701141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257954 ] 00:05:40.978 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.978 [2024-05-15 16:50:19.759757] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
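Before the overlapped test above tore its target down, it also checked that the failed second launch left no stray lock files behind: with a 0x7 mask, exactly spdk_cpu_lock_000 through spdk_cpu_lock_002 should exist. A small sketch of that comparison, following the glob and brace expansion visible in the trace:

  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
      [[ "${locks[*]}" == "${expected[*]}" ]]
  }

  check_remaining_locks && echo "only the expected core locks remain"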
00:05:40.978 [2024-05-15 16:50:19.759787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.240 [2024-05-15 16:50:19.825576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.240 [2024-05-15 16:50:19.825655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.240 [2024-05-15 16:50:19.825658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1258195 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1258195 /var/tmp/spdk2.sock 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1258195 ']' 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.812 16:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.812 [2024-05-15 16:50:20.528392] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:41.813 [2024-05-15 16:50:20.528447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258195 ] 00:05:41.813 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.813 [2024-05-15 16:50:20.599081] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.813 [2024-05-15 16:50:20.599105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.074 [2024-05-15 16:50:20.704609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.074 [2024-05-15 16:50:20.708666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.074 [2024-05-15 16:50:20.708669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.646 [2024-05-15 16:50:21.306602] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1257954 has claimed it. 
00:05:42.646 request: 00:05:42.646 { 00:05:42.646 "method": "framework_enable_cpumask_locks", 00:05:42.646 "req_id": 1 00:05:42.646 } 00:05:42.646 Got JSON-RPC error response 00:05:42.646 response: 00:05:42.646 { 00:05:42.646 "code": -32603, 00:05:42.646 "message": "Failed to claim CPU core: 2" 00:05:42.646 } 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1257954 /var/tmp/spdk.sock 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1257954 ']' 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.646 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1258195 /var/tmp/spdk2.sock 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1258195 ']' 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
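A minimal sketch of the overlapped-coremask scenario the RPC error above comes from, assuming SPDK's scripts/rpc.py client next to the spdk_tgt binary used in this run; the masks, socket path and RPC name are taken from this log, everything else is illustrative:

    # first target claims cores 0-2 (0x7) but starts with per-core lock files disabled
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # second target overlaps on core 2 (0x1c) and listens on its own RPC socket
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

    # enabling the locks on the first target succeeds and creates /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py framework_enable_cpumask_locks
    # the same call against the second target is expected to fail with -32603 "Failed to claim CPU core: 2"
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks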
00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.908 00:05:42.908 real 0m2.007s 00:05:42.908 user 0m0.775s 00:05:42.908 sys 0m0.156s 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.908 16:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.908 ************************************ 00:05:42.908 END TEST locking_overlapped_coremask_via_rpc 00:05:42.908 ************************************ 00:05:42.908 16:50:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:42.908 16:50:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1257954 ]] 00:05:42.908 16:50:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1257954 00:05:42.908 16:50:21 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1257954 ']' 00:05:42.908 16:50:21 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1257954 00:05:42.908 16:50:21 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:42.908 16:50:21 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.908 16:50:21 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1257954 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1257954' 00:05:43.169 killing process with pid 1257954 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1257954 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1257954 00:05:43.169 16:50:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1258195 ]] 00:05:43.169 16:50:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1258195 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1258195 ']' 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1258195 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:43.169 16:50:21 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258195 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258195' 00:05:43.430 killing process with pid 1258195 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1258195 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1258195 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1257954 ]] 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1257954 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1257954 ']' 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1257954 00:05:43.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1257954) - No such process 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1257954 is not found' 00:05:43.430 Process with pid 1257954 is not found 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1258195 ]] 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1258195 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1258195 ']' 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1258195 00:05:43.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1258195) - No such process 00:05:43.430 16:50:22 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1258195 is not found' 00:05:43.430 Process with pid 1258195 is not found 00:05:43.430 16:50:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.430 00:05:43.430 real 0m16.160s 00:05:43.430 user 0m27.556s 00:05:43.430 sys 0m4.805s 00:05:43.431 16:50:22 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.431 16:50:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.431 ************************************ 00:05:43.431 END TEST cpu_locks 00:05:43.431 ************************************ 00:05:43.431 00:05:43.431 real 0m41.395s 00:05:43.431 user 1m20.931s 00:05:43.431 sys 0m7.844s 00:05:43.431 16:50:22 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.431 16:50:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.431 ************************************ 00:05:43.431 END TEST event 00:05:43.431 ************************************ 00:05:43.730 16:50:22 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:43.730 16:50:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.730 16:50:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.730 16:50:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.730 ************************************ 00:05:43.730 START TEST thread 00:05:43.730 ************************************ 00:05:43.730 16:50:22 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:43.730 * Looking for test storage... 00:05:43.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:43.730 16:50:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.730 16:50:22 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:43.730 16:50:22 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.730 16:50:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.730 ************************************ 00:05:43.730 START TEST thread_poller_perf 00:05:43.730 ************************************ 00:05:43.730 16:50:22 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.730 [2024-05-15 16:50:22.507269] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:43.730 [2024-05-15 16:50:22.507368] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258719 ] 00:05:43.730 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.026 [2024-05-15 16:50:22.575821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.026 [2024-05-15 16:50:22.649949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.026 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:44.968 ====================================== 00:05:44.968 busy:2410724746 (cyc) 00:05:44.968 total_run_count: 287000 00:05:44.968 tsc_hz: 2400000000 (cyc) 00:05:44.968 ====================================== 00:05:44.968 poller_cost: 8399 (cyc), 3499 (nsec) 00:05:44.968 00:05:44.968 real 0m1.226s 00:05:44.968 user 0m1.137s 00:05:44.968 sys 0m0.084s 00:05:44.968 16:50:23 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.968 16:50:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.968 ************************************ 00:05:44.968 END TEST thread_poller_perf 00:05:44.968 ************************************ 00:05:44.968 16:50:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.968 16:50:23 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:44.968 16:50:23 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.968 16:50:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.968 ************************************ 00:05:44.968 START TEST thread_poller_perf 00:05:44.968 ************************************ 00:05:44.968 16:50:23 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.228 [2024-05-15 16:50:23.814728] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
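The poller_cost line is derived from the counters printed just above it: busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz. Re-deriving the first run's numbers by hand (a rough check, not part of the test):

    # 2410724746 busy cycles over 287000 poller runs at tsc_hz 2400000000 (2.4 GHz)
    awk 'BEGIN { cyc = 2410724746 / 287000; printf "%.0f cyc, %.0f nsec\n", cyc, cyc / 2.4 }'
    # prints about 8400 cyc / 3500 nsec, matching poller_cost: 8399 (cyc), 3499 (nsec) to within rounding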
00:05:45.228 [2024-05-15 16:50:23.814828] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258951 ] 00:05:45.228 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.228 [2024-05-15 16:50:23.879044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.228 [2024-05-15 16:50:23.945578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.228 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:46.169 ====================================== 00:05:46.169 busy:2401794392 (cyc) 00:05:46.169 total_run_count: 3809000 00:05:46.169 tsc_hz: 2400000000 (cyc) 00:05:46.169 ====================================== 00:05:46.169 poller_cost: 630 (cyc), 262 (nsec) 00:05:46.169 00:05:46.169 real 0m1.205s 00:05:46.169 user 0m1.131s 00:05:46.169 sys 0m0.070s 00:05:46.169 16:50:24 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.169 16:50:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.169 ************************************ 00:05:46.169 END TEST thread_poller_perf 00:05:46.169 ************************************ 00:05:46.430 16:50:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:46.430 00:05:46.430 real 0m2.696s 00:05:46.430 user 0m2.367s 00:05:46.430 sys 0m0.329s 00:05:46.430 16:50:25 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.430 16:50:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.430 ************************************ 00:05:46.430 END TEST thread 00:05:46.430 ************************************ 00:05:46.430 16:50:25 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:46.430 16:50:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.430 16:50:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.430 16:50:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.430 ************************************ 00:05:46.430 START TEST accel 00:05:46.430 ************************************ 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:46.430 * Looking for test storage... 00:05:46.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:46.430 16:50:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:46.430 16:50:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:46.430 16:50:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.430 16:50:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1259203 00:05:46.430 16:50:25 accel -- accel/accel.sh@63 -- # waitforlisten 1259203 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@827 -- # '[' -z 1259203 ']' 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.430 16:50:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.430 16:50:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.430 16:50:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:46.430 16:50:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.430 16:50:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.430 16:50:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.430 16:50:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.430 16:50:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.430 16:50:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:46.430 16:50:25 accel -- accel/accel.sh@41 -- # jq -r . 00:05:46.691 [2024-05-15 16:50:25.280573] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:46.691 [2024-05-15 16:50:25.280645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259203 ] 00:05:46.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.691 [2024-05-15 16:50:25.344552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.691 [2024-05-15 16:50:25.422479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.263 16:50:26 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.263 16:50:26 accel -- common/autotest_common.sh@860 -- # return 0 00:05:47.263 16:50:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:47.263 16:50:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:47.263 16:50:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:47.263 16:50:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:47.263 16:50:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:47.263 16:50:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:47.263 16:50:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.263 16:50:26 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:47.263 16:50:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.263 16:50:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 
16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:47.524 16:50:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:47.524 16:50:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:47.524 16:50:26 accel -- accel/accel.sh@75 -- # killprocess 1259203 00:05:47.524 16:50:26 accel -- common/autotest_common.sh@946 -- # '[' -z 1259203 ']' 00:05:47.524 16:50:26 accel -- common/autotest_common.sh@950 -- # kill -0 1259203 00:05:47.524 16:50:26 accel -- common/autotest_common.sh@951 -- # uname 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1259203 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1259203' 00:05:47.525 killing process with pid 1259203 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@965 -- # kill 1259203 00:05:47.525 16:50:26 accel -- common/autotest_common.sh@970 -- # wait 1259203 00:05:47.785 16:50:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:47.785 16:50:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:47.785 16:50:26 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:47.785 16:50:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.785 16:50:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 16:50:26 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:47.785 16:50:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
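The expected_opcs table walked through above is populated from the accel_get_opc_assignments RPC; the same query can be issued standalone against the running target, using the jq filter accel.sh applies (shown here only as a sketch):

    # list every accel opcode with the module currently assigned to it ("software" throughout this run)
    ./scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'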
00:05:47.785 16:50:26 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.785 16:50:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 16:50:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:47.785 16:50:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:47.785 16:50:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.786 16:50:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.786 ************************************ 00:05:47.786 START TEST accel_missing_filename 00:05:47.786 ************************************ 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.786 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:47.786 16:50:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:47.786 [2024-05-15 16:50:26.557279] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:47.786 [2024-05-15 16:50:26.557349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259525 ] 00:05:47.786 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.046 [2024-05-15 16:50:26.620105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.047 [2024-05-15 16:50:26.684787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.047 [2024-05-15 16:50:26.716767] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.047 [2024-05-15 16:50:26.753657] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:48.047 A filename is required. 
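The "A filename is required." abort is the point of this negative test: for compress/decompress workloads accel_perf takes the uncompressed input via -l, as the option listing further down in this log documents. A hedged sketch of the accepted form, reusing the test/accel/bib sample from this suite:

    # compress needs an input file; note the accel_compress_verify test that follows shows
    # that adding -y (verify) on top of -w compress is rejected as well
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib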
00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.047 00:05:48.047 real 0m0.279s 00:05:48.047 user 0m0.214s 00:05:48.047 sys 0m0.105s 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.047 16:50:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 ************************************ 00:05:48.047 END TEST accel_missing_filename 00:05:48.047 ************************************ 00:05:48.047 16:50:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.047 16:50:26 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:48.047 16:50:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.047 16:50:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.308 ************************************ 00:05:48.308 START TEST accel_compress_verify 00:05:48.308 ************************************ 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.308 16:50:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.308 
16:50:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:48.308 16:50:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:48.308 [2024-05-15 16:50:26.921407] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:48.308 [2024-05-15 16:50:26.921499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259634 ] 00:05:48.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.308 [2024-05-15 16:50:26.984972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.308 [2024-05-15 16:50:27.054204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.308 [2024-05-15 16:50:27.086196] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.308 [2024-05-15 16:50:27.122891] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:48.570 00:05:48.570 Compression does not support the verify option, aborting. 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.570 00:05:48.570 real 0m0.287s 00:05:48.570 user 0m0.221s 00:05:48.570 sys 0m0.109s 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.570 16:50:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 ************************************ 00:05:48.570 END TEST accel_compress_verify 00:05:48.570 ************************************ 00:05:48.570 16:50:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:48.570 16:50:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:48.570 16:50:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.570 16:50:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 ************************************ 00:05:48.570 START TEST accel_wrong_workload 00:05:48.570 ************************************ 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:48.570 16:50:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:48.570 Unsupported workload type: foobar 00:05:48.570 [2024-05-15 16:50:27.285791] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:48.570 accel_perf options: 00:05:48.570 [-h help message] 00:05:48.570 [-q queue depth per core] 00:05:48.570 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.570 [-T number of threads per core 00:05:48.570 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:48.570 [-t time in seconds] 00:05:48.570 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.570 [ dif_verify, , dif_generate, dif_generate_copy 00:05:48.570 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.570 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.570 [-S for crc32c workload, use this seed value (default 0) 00:05:48.570 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.570 [-f for fill workload, use this BYTE value (default 255) 00:05:48.570 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.570 [-y verify result if this switch is on] 00:05:48.570 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.570 Can be used to spread operations across a wider range of memory. 
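For reference, the options printed above combine into a normal (passing) invocation; the values below are illustrative and not taken from this run:

    # crc32c for 1 second, queue depth 64, 4 KiB transfers, seed 32, verifying each result
    ./build/examples/accel_perf -w crc32c -t 1 -q 64 -o 4096 -S 32 -y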
00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.570 00:05:48.570 real 0m0.035s 00:05:48.570 user 0m0.022s 00:05:48.570 sys 0m0.013s 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.570 16:50:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 ************************************ 00:05:48.570 END TEST accel_wrong_workload 00:05:48.570 ************************************ 00:05:48.570 Error: writing output failed: Broken pipe 00:05:48.570 16:50:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:48.570 16:50:27 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:48.570 16:50:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.570 16:50:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.570 ************************************ 00:05:48.570 START TEST accel_negative_buffers 00:05:48.570 ************************************ 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.570 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:48.570 16:50:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:48.570 -x option must be non-negative. 
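As the usage text states, -x sets the number of xor source buffers and its minimum is 2, so the -1 passed above is rejected before the app starts. A corrected sketch:

    # xor across the documented minimum of 2 source buffers, with result verification
    ./build/examples/accel_perf -t 1 -w xor -y -x 2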
00:05:48.570 [2024-05-15 16:50:27.402892] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:48.831 accel_perf options: 00:05:48.831 [-h help message] 00:05:48.831 [-q queue depth per core] 00:05:48.831 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.831 [-T number of threads per core 00:05:48.831 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:48.831 [-t time in seconds] 00:05:48.831 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.831 [ dif_verify, , dif_generate, dif_generate_copy 00:05:48.831 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.831 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.831 [-S for crc32c workload, use this seed value (default 0) 00:05:48.831 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.831 [-f for fill workload, use this BYTE value (default 255) 00:05:48.831 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.831 [-y verify result if this switch is on] 00:05:48.831 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.831 Can be used to spread operations across a wider range of memory. 00:05:48.831 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:48.831 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.831 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.831 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.831 00:05:48.831 real 0m0.038s 00:05:48.831 user 0m0.025s 00:05:48.831 sys 0m0.013s 00:05:48.831 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.831 16:50:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:48.831 ************************************ 00:05:48.831 END TEST accel_negative_buffers 00:05:48.831 ************************************ 00:05:48.831 Error: writing output failed: Broken pipe 00:05:48.831 16:50:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:48.831 16:50:27 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:48.831 16:50:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.831 16:50:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.831 ************************************ 00:05:48.831 START TEST accel_crc32c 00:05:48.831 ************************************ 00:05:48.831 16:50:27 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:48.831 16:50:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:48.831 [2024-05-15 16:50:27.516276] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:48.831 [2024-05-15 16:50:27.516347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259932 ] 00:05:48.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.831 [2024-05-15 16:50:27.578409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.831 [2024-05-15 16:50:27.648450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.092 16:50:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.033 16:50:28 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.033 16:50:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:50.034 16:50:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.034 00:05:50.034 real 0m1.289s 00:05:50.034 user 0m1.191s 00:05:50.034 sys 0m0.109s 00:05:50.034 16:50:28 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.034 16:50:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:50.034 ************************************ 00:05:50.034 END TEST accel_crc32c 00:05:50.034 ************************************ 00:05:50.034 16:50:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:50.034 16:50:28 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:50.034 16:50:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.034 16:50:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.034 ************************************ 00:05:50.034 START TEST accel_crc32c_C2 00:05:50.034 ************************************ 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:50.034 16:50:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:50.295 [2024-05-15 16:50:28.888410] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:50.295 [2024-05-15 16:50:28.888489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260173 ] 00:05:50.295 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.295 [2024-05-15 16:50:28.960003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.295 [2024-05-15 16:50:29.026878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.295 16:50:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.682 00:05:51.682 real 0m1.299s 00:05:51.682 user 0m1.199s 00:05:51.682 sys 0m0.111s 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.682 16:50:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:51.682 ************************************ 00:05:51.682 END TEST accel_crc32c_C2 00:05:51.682 ************************************ 00:05:51.682 16:50:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:51.682 16:50:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:51.682 16:50:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.682 16:50:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.682 ************************************ 00:05:51.682 START TEST accel_copy 00:05:51.682 ************************************ 00:05:51.682 16:50:30 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:51.682 16:50:30 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:51.682 [2024-05-15 16:50:30.269227] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:51.682 [2024-05-15 16:50:30.269295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260372 ] 00:05:51.682 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.682 [2024-05-15 16:50:30.331658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.682 [2024-05-15 16:50:30.402418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:51.682 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.683 16:50:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.066 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:53.067 16:50:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.067 00:05:53.067 real 0m1.291s 00:05:53.067 user 0m1.193s 00:05:53.067 sys 0m0.107s 00:05:53.067 16:50:31 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.067 16:50:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:53.067 ************************************ 00:05:53.067 END TEST accel_copy 00:05:53.067 ************************************ 00:05:53.067 16:50:31 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.067 16:50:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:53.067 16:50:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.067 16:50:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.067 ************************************ 00:05:53.067 START TEST accel_fill 00:05:53.067 ************************************ 00:05:53.067 16:50:31 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.067 16:50:31 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:53.067 [2024-05-15 16:50:31.639120] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:53.067 [2024-05-15 16:50:31.639185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260671 ] 00:05:53.067 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.067 [2024-05-15 16:50:31.709922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.067 [2024-05-15 16:50:31.776750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:53.067 16:50:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:54.447 16:50:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.447 00:05:54.447 real 0m1.293s 00:05:54.447 user 0m1.206s 00:05:54.447 sys 0m0.099s 00:05:54.447 16:50:32 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.447 16:50:32 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 ************************************ 00:05:54.447 END TEST accel_fill 00:05:54.447 ************************************ 00:05:54.447 16:50:32 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:54.447 16:50:32 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:54.447 16:50:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.447 16:50:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 ************************************ 00:05:54.447 START TEST accel_copy_crc32c 00:05:54.447 ************************************ 00:05:54.447 16:50:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:54.447 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:54.448 16:50:32 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
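The repeated "IFS=:", "read -r var val" and "case \"$var\" in" entries traced above (accel.sh@19 to @23) come from accel_test reading the key:value banner that accel_perf prints at start-up (workload type, transfer size '4096 bytes', queue depth 32, run time '1 seconds', module, and so on) and recording which opcode and module actually ran; the accel.sh@27 checks then assert on those values. A minimal sketch of such a loop, assuming illustrative key patterns and an accel_perf wrapper that may differ from the real accel/accel.sh:

    # Sketch only: the key patterns ("opcode", "module") and the accel_perf
    # wrapper are assumptions for illustration, not the verbatim test script.
    accel_test() {
      local accel_opc accel_module var val
      while IFS=: read -r var val; do
        case "$var" in
          *opcode*) accel_opc=$(echo "$val" | xargs) ;;    # e.g. copy_crc32c
          *module*) accel_module=$(echo "$val" | xargs) ;; # e.g. software
        esac
      done < <(accel_perf "$@")
      [[ -n $accel_module ]]            # the accel.sh@27 assertions seen above
      [[ -n $accel_opc ]]
      [[ $accel_module == software ]]
    }
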
00:05:54.448 [2024-05-15 16:50:33.013220] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:54.448 [2024-05-15 16:50:33.013296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261026 ] 00:05:54.448 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.448 [2024-05-15 16:50:33.073123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.448 [2024-05-15 16:50:33.137191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.448 16:50:33 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.448 16:50:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.833 00:05:55.833 real 0m1.280s 00:05:55.833 user 0m1.187s 00:05:55.833 sys 0m0.105s 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.833 16:50:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:55.833 ************************************ 00:05:55.833 END TEST accel_copy_crc32c 00:05:55.833 ************************************ 00:05:55.833 16:50:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:55.833 16:50:34 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:55.833 16:50:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.833 16:50:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.833 ************************************ 00:05:55.833 START TEST accel_copy_crc32c_C2 00:05:55.833 ************************************ 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:55.833 [2024-05-15 16:50:34.377104] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:05:55.833 [2024-05-15 16:50:34.377163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261374 ] 00:05:55.833 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.833 [2024-05-15 16:50:34.438934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.833 [2024-05-15 16:50:34.506472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.833 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:55.834 16:50:34 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.834 16:50:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.218 00:05:57.218 real 0m1.287s 00:05:57.218 user 0m1.193s 00:05:57.218 sys 0m0.106s 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.218 16:50:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:05:57.218 ************************************ 00:05:57.218 END TEST accel_copy_crc32c_C2 00:05:57.218 ************************************ 00:05:57.218 16:50:35 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:57.218 16:50:35 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:57.218 16:50:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.218 16:50:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.218 ************************************ 00:05:57.218 START TEST accel_dualcast 00:05:57.218 ************************************ 00:05:57.218 16:50:35 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:57.218 [2024-05-15 16:50:35.746946] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
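Every workload above is launched the same way: accel_perf runs as "accel_perf -c /dev/fd/62 -t 1 -w <workload> [options] -y", with build_accel_config supplying the JSON configuration on file descriptor 62 (the traced accel_json_cfg=(), "[[ 0 -gt 0 ]]", "local IFS=," and "jq -r ." steps, which all resolve to an empty module list in this run). A hedged sketch of that pattern; the JSON skeleton, the SPDK_EXAMPLE_DIR variable and the fd-62 redirection are assumptions rather than the verbatim accel.sh code:

    # Sketch only: shows the shape of the invocation traced above, with no
    # optional accel modules enabled (all "[[ 0 -gt 0 ]]" checks were false).
    build_accel_config() {
      accel_json_cfg=()
      # Optional accel modules would append JSON fragments here when the
      # corresponding SPDK_TEST_* switches are set; none are in this run.
      local IFS=,
      jq -r . <<< "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
    }

    accel_perf() {
      # e.g. "-t 1 -w dualcast -y" for the accel_dualcast block above
      "$SPDK_EXAMPLE_DIR/accel_perf" -c /dev/fd/62 "$@" 62< <(build_accel_config)
    }
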
00:05:57.218 [2024-05-15 16:50:35.747038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261667 ] 00:05:57.218 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.218 [2024-05-15 16:50:35.810334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.218 [2024-05-15 16:50:35.881712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 
16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.218 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:57.219 16:50:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.602 16:50:37 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:58.602 16:50:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.603 16:50:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:58.603 16:50:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.603 00:05:58.603 real 0m1.293s 00:05:58.603 user 0m1.190s 00:05:58.603 sys 0m0.113s 00:05:58.603 16:50:37 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.603 16:50:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:58.603 ************************************ 00:05:58.603 END TEST accel_dualcast 00:05:58.603 ************************************ 00:05:58.603 16:50:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:58.603 16:50:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:58.603 16:50:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.603 16:50:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.603 ************************************ 00:05:58.603 START TEST accel_compare 00:05:58.603 ************************************ 00:05:58.603 16:50:37 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:58.603 [2024-05-15 16:50:37.123448] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
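The real/user/sys triplet printed after each case above reads like output from bash's time builtin wrapped around the test function; that reading is an assumption, the trace does not say how the harness measures it. A sketch that would give a comparable per-run summary for a standalone invocation:

    # Sketch: time a standalone compare run; the harness's real/user/sys lines are assumed to come from
    # bash's time builtin, so this should yield a comparable summary.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    time "$SPDK/build/examples/accel_perf" -t 1 -w compare -y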
00:05:58.603 [2024-05-15 16:50:37.123561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261855 ] 00:05:58.603 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.603 [2024-05-15 16:50:37.187867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.603 [2024-05-15 16:50:37.257784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:58.603 16:50:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:59.986 16:50:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.986 00:05:59.986 real 0m1.292s 00:05:59.986 user 0m1.205s 00:05:59.986 sys 0m0.098s 00:05:59.986 16:50:38 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.986 16:50:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:59.986 ************************************ 00:05:59.986 END TEST accel_compare 00:05:59.986 ************************************ 00:05:59.986 16:50:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:59.986 16:50:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:59.986 16:50:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.986 16:50:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.986 ************************************ 00:05:59.986 START TEST accel_xor 00:05:59.986 ************************************ 00:05:59.986 16:50:38 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:59.986 16:50:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:59.987 [2024-05-15 16:50:38.497785] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
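Every accel_perf start in this section logs "EAL: No free 2048 kB hugepages reported on node 1", which suggests the runs are drawing their hugepages from node 0 only. A small sketch for checking per-node 2048 kB hugepage availability on the test host, assuming the usual sysfs and procfs locations:

    # Sketch: show 2048 kB hugepage counts per NUMA node and the overall totals (standard sysfs/procfs paths assumed).
    grep -H '' /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo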
00:05:59.987 [2024-05-15 16:50:38.497848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262116 ] 00:05:59.987 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.987 [2024-05-15 16:50:38.558242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.987 [2024-05-15 16:50:38.624169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.987 16:50:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.928 16:50:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.929 
16:50:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:00.929 16:50:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.929 00:06:00.929 real 0m1.284s 00:06:00.929 user 0m1.195s 00:06:00.929 sys 0m0.098s 00:06:00.929 16:50:39 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.929 16:50:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:00.929 ************************************ 00:06:00.929 END TEST accel_xor 00:06:00.929 ************************************ 00:06:01.190 16:50:39 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:01.190 16:50:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:01.190 16:50:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.190 16:50:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.190 ************************************ 00:06:01.190 START TEST accel_xor 00:06:01.190 ************************************ 00:06:01.190 16:50:39 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:01.190 16:50:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:01.190 [2024-05-15 16:50:39.864133] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
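This second xor case differs from the previous one only by the extra -x 3 flag on accel_perf, taken here to mean three source buffers are xored together instead of the default two; that reading of -x is an assumption, not something the trace states. A sketch of the standalone equivalent:

    # Sketch: same xor workload as above but with -x 3 carried over from the trace
    # (read here as "xor three source buffers", which is an assumption).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3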
00:06:01.190 [2024-05-15 16:50:39.864195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262465 ] 00:06:01.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.190 [2024-05-15 16:50:39.926738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.190 [2024-05-15 16:50:39.996991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:01.450 16:50:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.391 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.392 
16:50:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:02.392 16:50:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.392 00:06:02.392 real 0m1.286s 00:06:02.392 user 0m0.005s 00:06:02.392 sys 0m0.002s 00:06:02.392 16:50:41 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.392 16:50:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:02.392 ************************************ 00:06:02.392 END TEST accel_xor 00:06:02.392 ************************************ 00:06:02.392 16:50:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:02.392 16:50:41 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:02.392 16:50:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.392 16:50:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.392 ************************************ 00:06:02.392 START TEST accel_dif_verify 00:06:02.392 ************************************ 00:06:02.392 16:50:41 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:02.392 16:50:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:02.652 [2024-05-15 16:50:41.230295] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
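Each accel_perf start above runs under a unique --file-prefix of the form spdk_pid<PID>, as shown in the EAL parameter lines. If a run dies uncleanly its hugepage map files can linger; a sketch for spotting such leftovers, where the /dev/hugepages mount point and the spdk_pid* file naming are both assumptions drawn from the prefix, not facts from the trace:

    # Sketch: check for leftover SPDK hugepage files; /dev/hugepages and the spdk_pid* naming are assumptions.
    ls /dev/hugepages 2>/dev/null | grep '^spdk_pid' || echo 'no leftover spdk hugepage files found'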
00:06:02.652 [2024-05-15 16:50:41.230386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262821 ] 00:06:02.652 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.652 [2024-05-15 16:50:41.291514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.652 [2024-05-15 16:50:41.355249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 
16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.652 16:50:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.036 
16:50:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:04.036 16:50:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.036 00:06:04.036 real 0m1.278s 00:06:04.036 user 0m1.179s 00:06:04.036 sys 0m0.101s 00:06:04.036 16:50:42 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.036 16:50:42 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:04.036 ************************************ 00:06:04.036 END TEST accel_dif_verify 00:06:04.036 ************************************ 00:06:04.036 16:50:42 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:04.036 16:50:42 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:04.036 16:50:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.036 16:50:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.036 ************************************ 00:06:04.036 START TEST accel_dif_generate 00:06:04.036 ************************************ 00:06:04.036 16:50:42 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 
16:50:42 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:04.036 [2024-05-15 16:50:42.583494] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:04.036 [2024-05-15 16:50:42.583564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263145 ] 00:06:04.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.036 [2024-05-15 16:50:42.645176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.036 [2024-05-15 16:50:42.712616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.036 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.037 16:50:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:05.418 16:50:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.418 00:06:05.418 real 0m1.281s 00:06:05.418 user 0m1.184s 00:06:05.418 sys 
0m0.097s 00:06:05.418 16:50:43 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.418 16:50:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:05.418 ************************************ 00:06:05.418 END TEST accel_dif_generate 00:06:05.418 ************************************ 00:06:05.418 16:50:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:05.418 16:50:43 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:05.419 16:50:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.419 16:50:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 ************************************ 00:06:05.419 START TEST accel_dif_generate_copy 00:06:05.419 ************************************ 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:05.419 16:50:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:05.419 [2024-05-15 16:50:43.940168] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:06:05.419 [2024-05-15 16:50:43.940232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263320 ] 00:06:05.419 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.419 [2024-05-15 16:50:44.002612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.419 [2024-05-15 16:50:44.072092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.419 16:50:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.388 00:06:06.388 real 0m1.283s 00:06:06.388 user 0m0.005s 00:06:06.388 sys 0m0.000s 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.388 16:50:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:06.388 ************************************ 00:06:06.388 END TEST accel_dif_generate_copy 00:06:06.388 ************************************ 00:06:06.672 16:50:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:06.672 16:50:45 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.672 16:50:45 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:06.672 16:50:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.672 16:50:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.672 ************************************ 00:06:06.672 START TEST accel_comp 00:06:06.672 ************************************ 00:06:06.672 16:50:45 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:06.672 [2024-05-15 16:50:45.300927] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:06.672 [2024-05-15 16:50:45.300987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263563 ] 00:06:06.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.672 [2024-05-15 16:50:45.362913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.672 [2024-05-15 16:50:45.429948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 
16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:06.672 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.673 16:50:45 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.673 16:50:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:08.056 16:50:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.056 00:06:08.056 real 0m1.283s 00:06:08.056 user 0m0.006s 00:06:08.056 sys 0m0.000s 00:06:08.056 16:50:46 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.056 16:50:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:08.056 ************************************ 00:06:08.056 END TEST accel_comp 00:06:08.056 ************************************ 00:06:08.056 16:50:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.056 16:50:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:08.056 16:50:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.056 16:50:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.056 ************************************ 00:06:08.056 START TEST accel_decomp 00:06:08.056 ************************************ 00:06:08.056 16:50:46 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.056 16:50:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:08.056 16:50:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:08.056 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.056 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.056 16:50:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.056 16:50:46 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:08.057 [2024-05-15 16:50:46.658385] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:08.057 [2024-05-15 16:50:46.658445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263910 ] 00:06:08.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.057 [2024-05-15 16:50:46.718878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.057 [2024-05-15 16:50:46.784532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:08.057 16:50:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.438 16:50:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.438 00:06:09.438 real 0m1.279s 00:06:09.438 user 0m1.183s 00:06:09.438 sys 0m0.096s 00:06:09.438 16:50:47 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.438 16:50:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:09.438 ************************************ 00:06:09.439 END TEST accel_decomp 00:06:09.439 ************************************ 00:06:09.439 
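For reference, every decompress case in this run is driven through the same example binary with the arguments visible in the trace above. Below is a minimal standalone sketch, not the exact invocation accel.sh performs: it assumes the SPDK checkout and build directory used by this job, omits the JSON accel config that accel.sh pipes in over -c /dev/fd/62 (empty in this run, since accel_json_cfg=() stays unset), and the SPDK= shell variable is only illustrative. The flags themselves are copied from the trace: -t 1 runs the workload for one second (the '1 seconds' value above), -w selects the operation, -l points the compress/decompress workloads at the bundled input file, -y asks accel_perf to verify the result, and -m 0xf (used by the mcore variants further down) is the core mask that yields the four reactors seen in that test's EAL output.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # single-core software decompress of the bundled test file for one second
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y
  # same workload spread across four cores, as exercised by the accel_decomp_mcore test below
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf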
16:50:47 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.439 16:50:47 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:09.439 16:50:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.439 16:50:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.439 ************************************ 00:06:09.439 START TEST accel_decmop_full 00:06:09.439 ************************************ 00:06:09.439 16:50:47 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:09.439 16:50:47 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:09.439 [2024-05-15 16:50:48.017033] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:06:09.439 [2024-05-15 16:50:48.017121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264260 ] 00:06:09.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.439 [2024-05-15 16:50:48.078506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.439 [2024-05-15 16:50:48.142086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.439 16:50:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.846 16:50:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.846 00:06:10.846 real 0m1.291s 00:06:10.846 user 0m1.198s 00:06:10.846 sys 0m0.094s 00:06:10.846 16:50:49 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.846 16:50:49 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:10.846 ************************************ 00:06:10.846 END TEST accel_decmop_full 00:06:10.846 ************************************ 00:06:10.846 16:50:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.846 16:50:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:10.846 16:50:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.846 16:50:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.846 ************************************ 00:06:10.846 START TEST accel_decomp_mcore 00:06:10.846 ************************************ 00:06:10.846 16:50:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.846 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:10.846 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:10.846 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.846 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.846 16:50:49 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:10.847 [2024-05-15 16:50:49.386002] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:10.847 [2024-05-15 16:50:49.386082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264613 ] 00:06:10.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.847 [2024-05-15 16:50:49.457575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.847 [2024-05-15 16:50:49.529283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.847 [2024-05-15 16:50:49.529399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.847 [2024-05-15 16:50:49.529574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.847 [2024-05-15 16:50:49.529599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.847 16:50:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.228 00:06:12.228 real 0m1.311s 00:06:12.228 user 0m4.449s 00:06:12.228 sys 0m0.110s 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.228 16:50:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 ************************************ 00:06:12.228 END TEST accel_decomp_mcore 00:06:12.228 ************************************ 00:06:12.228 16:50:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.228 16:50:50 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:12.228 16:50:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.228 16:50:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 ************************************ 00:06:12.228 START TEST accel_decomp_full_mcore 00:06:12.228 ************************************ 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:12.228 [2024-05-15 16:50:50.781247] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:12.228 [2024-05-15 16:50:50.781309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264823 ] 00:06:12.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.228 [2024-05-15 16:50:50.844341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.228 [2024-05-15 16:50:50.919389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.228 [2024-05-15 16:50:50.919529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.228 [2024-05-15 16:50:50.919668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.228 [2024-05-15 16:50:50.919668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.228 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.229 16:50:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.612 00:06:13.612 real 0m1.321s 00:06:13.612 user 0m4.498s 00:06:13.612 sys 0m0.117s 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.612 16:50:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:13.612 ************************************ 00:06:13.612 END TEST accel_decomp_full_mcore 00:06:13.612 ************************************ 00:06:13.612 16:50:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:13.612 16:50:52 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:13.612 16:50:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.612 16:50:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.612 ************************************ 00:06:13.612 START TEST accel_decomp_mthread 00:06:13.612 ************************************ 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:13.612 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:06:13.612 [2024-05-15 16:50:52.181947] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:13.612 [2024-05-15 16:50:52.182033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265036 ] 00:06:13.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.612 [2024-05-15 16:50:52.243398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.613 [2024-05-15 16:50:52.311127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:13.613 16:50:52 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.997 00:06:14.997 real 0m1.292s 00:06:14.997 user 0m1.191s 00:06:14.997 sys 0m0.113s 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.997 16:50:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:14.997 ************************************ 00:06:14.997 END TEST accel_decomp_mthread 00:06:14.997 ************************************ 00:06:14.997 16:50:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.997 16:50:53 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:14.997 16:50:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.997 16:50:53 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.997 ************************************ 00:06:14.997 START TEST accel_decomp_full_mthread 00:06:14.997 ************************************ 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:14.997 [2024-05-15 16:50:53.552747] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:06:14.997 [2024-05-15 16:50:53.552807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265361 ] 00:06:14.997 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.997 [2024-05-15 16:50:53.612772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.997 [2024-05-15 16:50:53.677286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.997 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.998 16:50:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.383 00:06:16.383 real 0m1.314s 00:06:16.383 user 0m1.235s 00:06:16.383 sys 0m0.091s 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.383 16:50:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:16.383 ************************************ 00:06:16.383 END TEST accel_decomp_full_mthread 00:06:16.383 
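(Note on the four decompression suites traced above: accel_decomp_mcore, accel_decomp_full_mcore, accel_decomp_mthread and accel_decomp_full_mthread all drive the same accel_perf example binary with the software module; only buffer size, thread count and core mask differ. Below is a minimal sketch of the invocation behind accel_decomp_full_mthread, copied from the command echoed in the trace; the flag interpretations in the comments are inferred from the values accel.sh reads back (accel_opc=decompress, '111250 bytes', software, '1 seconds', 2) and are not authoritative.

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 \        # accel JSON config fed in by accel.sh (build_accel_config / accel_json_cfg)
      -t 1 \                 # run the workload for 1 second ('1 seconds' in the trace)
      -w decompress \        # operation under test (accel_opc=decompress)
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \  # compressed input file used by all four suites
      -y \                   # verify the decompressed output
      -o 0 \                 # full-size buffers: '111250 bytes' here vs '4096 bytes' in the non-full runs
      -T 2                   # two threads (val=2 in this trace; plain accel_decomp_mthread passes the same -T 2)

The full_mcore variant instead passes -y -o 0 -m 0xf, which is why its trace reports 'Total cores available: 4' and reactors starting on cores 0-3.)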
************************************ 00:06:16.383 16:50:54 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:16.383 16:50:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:16.383 16:50:54 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:16.383 16:50:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:16.383 16:50:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.383 16:50:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.383 16:50:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.383 16:50:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.383 16:50:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.383 16:50:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.383 16:50:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.383 16:50:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:16.383 16:50:54 accel -- accel/accel.sh@41 -- # jq -r . 00:06:16.383 ************************************ 00:06:16.383 START TEST accel_dif_functional_tests 00:06:16.383 ************************************ 00:06:16.383 16:50:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:16.383 [2024-05-15 16:50:54.969372] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:16.383 [2024-05-15 16:50:54.969419] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265713 ] 00:06:16.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.383 [2024-05-15 16:50:55.030229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.383 [2024-05-15 16:50:55.101751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.383 [2024-05-15 16:50:55.101879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.383 [2024-05-15 16:50:55.101882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.383 00:06:16.383 00:06:16.383 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.383 http://cunit.sourceforge.net/ 00:06:16.383 00:06:16.383 00:06:16.383 Suite: accel_dif 00:06:16.383 Test: verify: DIF generated, GUARD check ...passed 00:06:16.383 Test: verify: DIF generated, APPTAG check ...passed 00:06:16.383 Test: verify: DIF generated, REFTAG check ...passed 00:06:16.383 Test: verify: DIF not generated, GUARD check ...[2024-05-15 16:50:55.157381] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:16.383 [2024-05-15 16:50:55.157418] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:16.383 passed 00:06:16.383 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 16:50:55.157448] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:16.383 [2024-05-15 16:50:55.157463] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:16.383 passed 00:06:16.383 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 16:50:55.157479] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:16.383 [2024-05-15 
16:50:55.157494] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:16.383 passed 00:06:16.383 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:16.383 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 16:50:55.157539] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:16.383 passed 00:06:16.383 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:16.383 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:16.383 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:16.383 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 16:50:55.157659] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:16.383 passed 00:06:16.383 Test: generate copy: DIF generated, GUARD check ...passed 00:06:16.383 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:16.383 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:16.383 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:16.383 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:16.383 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:16.383 Test: generate copy: iovecs-len validate ...[2024-05-15 16:50:55.157846] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:16.383 passed 00:06:16.383 Test: generate copy: buffer alignment validate ...passed 00:06:16.383 00:06:16.383 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.383 suites 1 1 n/a 0 0 00:06:16.383 tests 20 20 20 0 0 00:06:16.383 asserts 204 204 204 0 n/a 00:06:16.383 00:06:16.383 Elapsed time = 0.002 seconds 00:06:16.645 00:06:16.645 real 0m0.353s 00:06:16.645 user 0m0.452s 00:06:16.645 sys 0m0.120s 00:06:16.645 16:50:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.645 16:50:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:16.645 ************************************ 00:06:16.645 END TEST accel_dif_functional_tests 00:06:16.645 ************************************ 00:06:16.645 00:06:16.645 real 0m30.192s 00:06:16.645 user 0m33.623s 00:06:16.645 sys 0m4.167s 00:06:16.645 16:50:55 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.645 16:50:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.645 ************************************ 00:06:16.645 END TEST accel 00:06:16.645 ************************************ 00:06:16.645 16:50:55 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:16.645 16:50:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.645 16:50:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.645 16:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.645 ************************************ 00:06:16.645 START TEST accel_rpc 00:06:16.645 ************************************ 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:16.645 * Looking for test storage... 
00:06:16.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:16.645 16:50:55 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.645 16:50:55 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1266001 00:06:16.645 16:50:55 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1266001 00:06:16.645 16:50:55 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1266001 ']' 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.645 16:50:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.906 [2024-05-15 16:50:55.518710] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:16.906 [2024-05-15 16:50:55.518785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266001 ] 00:06:16.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.906 [2024-05-15 16:50:55.584939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.906 [2024-05-15 16:50:55.663021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.476 16:50:56 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.476 16:50:56 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:17.476 16:50:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:17.476 16:50:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:17.476 16:50:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:17.476 16:50:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:17.476 16:50:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:17.476 16:50:56 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.476 16:50:56 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.476 16:50:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.476 ************************************ 00:06:17.476 START TEST accel_assign_opcode 00:06:17.476 ************************************ 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:17.476 [2024-05-15 16:50:56.300913] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.476 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:17.736 [2024-05-15 16:50:56.312938] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.736 software 00:06:17.736 00:06:17.736 real 0m0.215s 00:06:17.736 user 0m0.051s 00:06:17.736 sys 0m0.010s 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.736 16:50:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:17.736 ************************************ 00:06:17.736 END TEST accel_assign_opcode 00:06:17.736 ************************************ 00:06:17.736 16:50:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1266001 00:06:17.736 16:50:56 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1266001 ']' 00:06:17.736 16:50:56 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1266001 00:06:17.736 16:50:56 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:17.736 16:50:56 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.736 16:50:56 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1266001 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1266001' 00:06:17.996 killing process with pid 1266001 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@965 -- # kill 1266001 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@970 -- # wait 1266001 00:06:17.996 00:06:17.996 real 0m1.449s 00:06:17.996 user 0m11.617s 00:06:17.996 sys 0m5.359s 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.996 16:50:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.996 ************************************ 00:06:17.996 END TEST accel_rpc 00:06:17.996 ************************************ 00:06:18.257 16:50:56 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:18.257 16:50:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.257 16:50:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.257 16:50:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 ************************************ 00:06:18.257 START TEST app_cmdline 00:06:18.257 ************************************ 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:18.257 * Looking for test storage... 00:06:18.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.257 16:50:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.257 16:50:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1266651 00:06:18.257 16:50:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1266651 00:06:18.257 16:50:56 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1266651 ']' 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.257 16:50:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 [2024-05-15 16:50:57.007394] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:06:18.257 [2024-05-15 16:50:57.007458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266651 ] 00:06:18.257 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.257 [2024-05-15 16:50:57.071660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.518 [2024-05-15 16:50:57.144724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.086 16:50:57 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.086 16:50:57 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:19.086 16:50:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:19.347 { 00:06:19.347 "version": "SPDK v24.05-pre git sha1 c7a82f3a8", 00:06:19.347 "fields": { 00:06:19.347 "major": 24, 00:06:19.347 "minor": 5, 00:06:19.347 "patch": 0, 00:06:19.347 "suffix": "-pre", 00:06:19.347 "commit": "c7a82f3a8" 00:06:19.347 } 00:06:19.347 } 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:19.347 16:50:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.347 16:50:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:19.347 16:50:58 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.347 request: 00:06:19.347 { 00:06:19.347 "method": "env_dpdk_get_mem_stats", 00:06:19.347 "req_id": 1 00:06:19.347 } 00:06:19.347 Got JSON-RPC error response 00:06:19.347 response: 00:06:19.347 { 00:06:19.347 "code": -32601, 00:06:19.347 "message": "Method not found" 00:06:19.347 } 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.347 16:50:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1266651 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1266651 ']' 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1266651 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.347 16:50:58 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1266651 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1266651' 00:06:19.607 killing process with pid 1266651 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@965 -- # kill 1266651 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@970 -- # wait 1266651 00:06:19.607 00:06:19.607 real 0m1.572s 00:06:19.607 user 0m1.928s 00:06:19.607 sys 0m0.384s 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.607 16:50:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.607 ************************************ 00:06:19.607 END TEST app_cmdline 00:06:19.607 ************************************ 00:06:19.867 16:50:58 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.867 16:50:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.867 16:50:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.867 16:50:58 -- common/autotest_common.sh@10 -- # set +x 00:06:19.867 ************************************ 00:06:19.867 START TEST version 00:06:19.867 ************************************ 00:06:19.867 16:50:58 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.867 * Looking for test storage... 
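[editor's note] Just above, app_cmdline finishes with a negative check: because spdk_tgt was started with --rpcs-allowed, any method outside the allow-list is rejected with JSON-RPC error -32601 (Method not found). A small sketch of that behaviour against the same target, assuming it is still running with the allow-list shown earlier:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # allowed method: succeeds
    "$SPDK"/scripts/rpc.py rpc_get_methods
    # method not on the allow-list: expected to fail with code -32601
    if ! "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats; then
        echo "env_dpdk_get_mem_stats rejected as expected"
    fi
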
00:06:19.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:19.867 16:50:58 version -- app/version.sh@17 -- # get_header_version major 00:06:19.867 16:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # cut -f2 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.867 16:50:58 version -- app/version.sh@17 -- # major=24 00:06:19.867 16:50:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.867 16:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # cut -f2 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.867 16:50:58 version -- app/version.sh@18 -- # minor=5 00:06:19.867 16:50:58 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.867 16:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # cut -f2 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.867 16:50:58 version -- app/version.sh@19 -- # patch=0 00:06:19.867 16:50:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.867 16:50:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # cut -f2 00:06:19.867 16:50:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.867 16:50:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.867 16:50:58 version -- app/version.sh@22 -- # version=24.5 00:06:19.867 16:50:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.867 16:50:58 version -- app/version.sh@28 -- # version=24.5rc0 00:06:19.867 16:50:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:19.867 16:50:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.867 16:50:58 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:19.867 16:50:58 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:19.867 00:06:19.867 real 0m0.172s 00:06:19.867 user 0m0.084s 00:06:19.867 sys 0m0.126s 00:06:19.867 16:50:58 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.867 16:50:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:19.867 ************************************ 00:06:19.867 END TEST version 00:06:19.867 ************************************ 00:06:19.867 16:50:58 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:19.867 16:50:58 -- spdk/autotest.sh@194 -- # uname -s 00:06:19.867 16:50:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:19.867 16:50:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.867 16:50:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.867 16:50:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
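[editor's note] The version test above pulls SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep/cut/tr and checks the result against the installed Python package. A condensed sketch of the same extraction, assuming the repository path used in this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hdr="$SPDK"/include/spdk/version.h
    get_header_version() {
        # e.g. get_header_version MAJOR -> 24
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)
    minor=$(get_header_version MINOR)
    patch=$(get_header_version PATCH)
    suffix=$(get_header_version SUFFIX)
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    echo "header version: $version (suffix $suffix)"
    # the test then maps the suffix (e.g. -pre -> rc0) and compares against
    # python3 -c 'import spdk; print(spdk.__version__)'
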
00:06:19.867 16:50:58 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:19.867 16:50:58 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:19.867 16:50:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.867 16:50:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.128 16:50:58 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:20.128 16:50:58 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:20.128 16:50:58 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:20.128 16:50:58 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:20.128 16:50:58 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:20.128 16:50:58 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:20.128 16:50:58 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:20.128 16:50:58 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:20.128 16:50:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.128 16:50:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.128 ************************************ 00:06:20.128 START TEST nvmf_tcp 00:06:20.128 ************************************ 00:06:20.128 16:50:58 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:20.128 * Looking for test storage... 00:06:20.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.128 16:50:58 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.129 16:50:58 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.129 16:50:58 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.129 16:50:58 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.129 16:50:58 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.129 16:50:58 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.129 16:50:58 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.129 16:50:58 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:20.129 16:50:58 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:20.129 16:50:58 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:20.129 16:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:20.129 16:50:58 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:20.129 16:50:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:20.129 16:50:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.129 
16:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.129 ************************************ 00:06:20.129 START TEST nvmf_example 00:06:20.129 ************************************ 00:06:20.129 16:50:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:20.390 * Looking for test storage... 00:06:20.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.390 16:50:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:20.390 16:50:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:26.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:26.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.980 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:26.981 Found net devices under 
0000:4b:00.0: cvl_0_0 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:26.981 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.981 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.241 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.241 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.241 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:27.241 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.241 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.241 16:51:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:27.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:06:27.241 00:06:27.241 --- 10.0.0.2 ping statistics --- 00:06:27.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.241 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:06:27.241 00:06:27.241 --- 10.0.0.1 ping statistics --- 00:06:27.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.241 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1270996 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1270996 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:27.241 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1270996 ']' 00:06:27.242 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.242 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.242 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
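[editor's note] Before the example target is started, the nvmf_example test moves one port of the e810 pair (cvl_0_0) into a private network namespace, assigns 10.0.0.2/24 inside it and 10.0.0.1/24 to the initiator-side port, opens TCP port 4420, and verifies connectivity with ping; the target then runs inside that namespace. A condensed sketch of that plumbing, assuming the interface names and workspace path from this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target namespace -> root namespace
    # run the example nvmf target inside the namespace, as the test does
    ip netns exec "$NS" "$SPDK"/build/examples/nvmf -i 0 -g 10000 -m 0xF &
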
00:06:27.242 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.242 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.445 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:28.446 16:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:28.446 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.446 16:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:28.446 16:51:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:28.446 EAL: No free 2048 kB hugepages reported on node 1 
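[editor's note] With the example target listening, the test provisions it over RPC (TCP transport, one 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420) and then drives it with spdk_nvme_perf; the measured IOPS/latency table follows below. The test issues these through its rpc_cmd helper; plain scripts/rpc.py calls against the default socket are used here as a stand-in:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK"/scripts/rpc.py            # talks to the target over /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512        # 64 MB bdev, 512-byte blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random I/O at queue depth 64 with a 30 % read mix (-M 30)
    "$SPDK"/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
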
00:06:38.447 Initializing NVMe Controllers 00:06:38.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:38.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:38.447 Initialization complete. Launching workers. 00:06:38.447 ======================================================== 00:06:38.447 Latency(us) 00:06:38.447 Device Information : IOPS MiB/s Average min max 00:06:38.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18599.88 72.66 3440.50 868.00 16386.33 00:06:38.447 ======================================================== 00:06:38.447 Total : 18599.88 72.66 3440.50 868.00 16386.33 00:06:38.447 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:38.447 rmmod nvme_tcp 00:06:38.447 rmmod nvme_fabrics 00:06:38.447 rmmod nvme_keyring 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1270996 ']' 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1270996 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1270996 ']' 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1270996 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.447 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1270996 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1270996' 00:06:38.707 killing process with pid 1270996 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1270996 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1270996 00:06:38.707 nvmf threads initialize successfully 00:06:38.707 bdev subsystem init successfully 00:06:38.707 created a nvmf target service 00:06:38.707 create targets's poll groups done 00:06:38.707 all subsystems of target started 00:06:38.707 nvmf target is running 00:06:38.707 all subsystems of target stopped 00:06:38.707 destroy targets's poll groups done 00:06:38.707 destroyed the nvmf target service 00:06:38.707 bdev subsystem finish successfully 00:06:38.707 nvmf threads destroy successfully 00:06:38.707 16:51:17 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.707 16:51:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.256 16:51:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:41.256 16:51:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:41.256 16:51:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.256 16:51:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.256 00:06:41.256 real 0m20.665s 00:06:41.256 user 0m46.198s 00:06:41.256 sys 0m6.237s 00:06:41.256 16:51:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.256 16:51:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:41.256 ************************************ 00:06:41.256 END TEST nvmf_example 00:06:41.256 ************************************ 00:06:41.256 16:51:19 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:41.256 16:51:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:41.256 16:51:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.256 16:51:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.256 ************************************ 00:06:41.256 START TEST nvmf_filesystem 00:06:41.256 ************************************ 00:06:41.256 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:41.256 * Looking for test storage... 
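[editor's note] The nvmftestfini teardown shown above kills the example target, unloads the nvme-tcp module stack, and removes the test network state before the next test begins. A rough sketch of the equivalent cleanup, assuming the same names; the log does not show the internals of the _remove_spdk_ns helper, so the final namespace deletion is an assumption, and the pid variable is hypothetical:

    NS=cvl_0_0_ns_spdk
    kill -9 "$nvmf_target_pid"         # pid captured when the target was launched (hypothetical variable)
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring, as seen in the log
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1           # drop the initiator-side test address
    ip netns delete "$NS"              # assumed equivalent of the _remove_spdk_ns helper
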
00:06:41.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.256 16:51:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:41.256 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:41.256 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:41.256 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:41.256 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:41.257 16:51:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:41.257 16:51:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.257 
16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:41.257 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:41.257 #define SPDK_CONFIG_H 00:06:41.257 #define SPDK_CONFIG_APPS 1 00:06:41.257 #define SPDK_CONFIG_ARCH native 00:06:41.257 #undef SPDK_CONFIG_ASAN 00:06:41.257 #undef SPDK_CONFIG_AVAHI 00:06:41.257 #undef SPDK_CONFIG_CET 00:06:41.257 #define SPDK_CONFIG_COVERAGE 1 00:06:41.257 #define SPDK_CONFIG_CROSS_PREFIX 00:06:41.257 #undef SPDK_CONFIG_CRYPTO 00:06:41.257 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:41.257 #undef SPDK_CONFIG_CUSTOMOCF 00:06:41.257 #undef SPDK_CONFIG_DAOS 00:06:41.257 #define SPDK_CONFIG_DAOS_DIR 00:06:41.257 #define SPDK_CONFIG_DEBUG 1 00:06:41.257 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:41.257 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:41.257 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:41.257 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:41.257 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:41.257 #undef SPDK_CONFIG_DPDK_UADK 00:06:41.257 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:41.257 #define SPDK_CONFIG_EXAMPLES 1 00:06:41.257 #undef SPDK_CONFIG_FC 00:06:41.257 #define SPDK_CONFIG_FC_PATH 00:06:41.257 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:41.257 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:41.257 #undef SPDK_CONFIG_FUSE 00:06:41.257 #undef SPDK_CONFIG_FUZZER 00:06:41.257 #define SPDK_CONFIG_FUZZER_LIB 00:06:41.257 #undef SPDK_CONFIG_GOLANG 00:06:41.257 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:41.257 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:41.257 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:41.257 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:41.257 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:41.257 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:41.258 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:41.258 #define SPDK_CONFIG_IDXD 1 00:06:41.258 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:41.258 #undef SPDK_CONFIG_IPSEC_MB 00:06:41.258 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:41.258 #define SPDK_CONFIG_ISAL 1 00:06:41.258 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:41.258 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:41.258 #define SPDK_CONFIG_LIBDIR 00:06:41.258 #undef SPDK_CONFIG_LTO 00:06:41.258 #define SPDK_CONFIG_MAX_LCORES 00:06:41.258 #define SPDK_CONFIG_NVME_CUSE 1 00:06:41.258 #undef SPDK_CONFIG_OCF 00:06:41.258 #define SPDK_CONFIG_OCF_PATH 00:06:41.258 #define SPDK_CONFIG_OPENSSL_PATH 00:06:41.258 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:41.258 #define SPDK_CONFIG_PGO_DIR 00:06:41.258 #undef 
SPDK_CONFIG_PGO_USE 00:06:41.258 #define SPDK_CONFIG_PREFIX /usr/local 00:06:41.258 #undef SPDK_CONFIG_RAID5F 00:06:41.258 #undef SPDK_CONFIG_RBD 00:06:41.258 #define SPDK_CONFIG_RDMA 1 00:06:41.258 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:41.258 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:41.258 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:41.258 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:41.258 #define SPDK_CONFIG_SHARED 1 00:06:41.258 #undef SPDK_CONFIG_SMA 00:06:41.258 #define SPDK_CONFIG_TESTS 1 00:06:41.258 #undef SPDK_CONFIG_TSAN 00:06:41.258 #define SPDK_CONFIG_UBLK 1 00:06:41.258 #define SPDK_CONFIG_UBSAN 1 00:06:41.258 #undef SPDK_CONFIG_UNIT_TESTS 00:06:41.258 #undef SPDK_CONFIG_URING 00:06:41.258 #define SPDK_CONFIG_URING_PATH 00:06:41.258 #undef SPDK_CONFIG_URING_ZNS 00:06:41.258 #undef SPDK_CONFIG_USDT 00:06:41.258 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:41.258 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:41.258 #define SPDK_CONFIG_VFIO_USER 1 00:06:41.258 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:41.258 #define SPDK_CONFIG_VHOST 1 00:06:41.258 #define SPDK_CONFIG_VIRTIO 1 00:06:41.258 #undef SPDK_CONFIG_VTUNE 00:06:41.258 #define SPDK_CONFIG_VTUNE_DIR 00:06:41.258 #define SPDK_CONFIG_WERROR 1 00:06:41.258 #define SPDK_CONFIG_WPDK_DIR 00:06:41.258 #undef SPDK_CONFIG_XNVME 00:06:41.258 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:41.258 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:41.259 16:51:19 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:41.259 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1274223 ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1274223 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.WEMqKS 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.WEMqKS/tests/target /tmp/spdk.WEMqKS 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=967749632 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4316680192 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=122763882496 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129371017216 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=6607134720 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64682131456 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685506560 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25864511488 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874206720 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9695232 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=234496 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:06:41.260 16:51:19 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=269312 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64685056000 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685510656 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=454656 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12937097216 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937101312 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:41.260 * Looking for test storage... 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=122763882496 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=8821727232 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:41.260 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:41.261 16:51:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.842 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:47.843 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:06:47.843 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:47.843 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:47.843 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.843 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:06:48.103 00:06:48.103 --- 10.0.0.2 ping statistics --- 00:06:48.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.103 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:06:48.103 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:48.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:06:48.364 00:06:48.364 --- 10.0.0.1 ping statistics --- 00:06:48.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.364 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.364 ************************************ 00:06:48.364 START TEST nvmf_filesystem_no_in_capsule 00:06:48.364 ************************************ 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.364 16:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1277969 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1277969 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1277969 ']' 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.364 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.364 [2024-05-15 16:51:27.051070] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:06:48.364 [2024-05-15 16:51:27.051123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.364 [2024-05-15 16:51:27.116428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.364 [2024-05-15 16:51:27.183679] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.364 [2024-05-15 16:51:27.183717] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.364 [2024-05-15 16:51:27.183724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.364 [2024-05-15 16:51:27.183731] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.364 [2024-05-15 16:51:27.183736] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.364 [2024-05-15 16:51:27.183872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.364 [2024-05-15 16:51:27.183987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.364 [2024-05-15 16:51:27.184147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.364 [2024-05-15 16:51:27.184148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 [2024-05-15 16:51:27.873243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.392 16:51:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 Malloc1 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 16:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 [2024-05-15 16:51:28.008788] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:49.392 [2024-05-15 16:51:28.009043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:49.392 16:51:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:49.392 { 00:06:49.392 "name": "Malloc1", 00:06:49.392 "aliases": [ 00:06:49.392 "68a66ed1-88c7-48d3-b56f-c56c15cac2f7" 00:06:49.392 ], 00:06:49.392 "product_name": "Malloc disk", 00:06:49.392 "block_size": 512, 00:06:49.392 "num_blocks": 1048576, 00:06:49.392 "uuid": "68a66ed1-88c7-48d3-b56f-c56c15cac2f7", 00:06:49.392 "assigned_rate_limits": { 00:06:49.392 "rw_ios_per_sec": 0, 00:06:49.392 "rw_mbytes_per_sec": 0, 00:06:49.392 "r_mbytes_per_sec": 0, 00:06:49.392 "w_mbytes_per_sec": 0 00:06:49.392 }, 00:06:49.392 "claimed": true, 00:06:49.392 "claim_type": "exclusive_write", 00:06:49.392 "zoned": false, 00:06:49.392 "supported_io_types": { 00:06:49.392 "read": true, 00:06:49.392 "write": true, 00:06:49.392 "unmap": true, 00:06:49.392 "write_zeroes": true, 00:06:49.392 "flush": true, 00:06:49.392 "reset": true, 00:06:49.392 "compare": false, 00:06:49.392 "compare_and_write": false, 00:06:49.392 "abort": true, 00:06:49.392 "nvme_admin": false, 00:06:49.392 "nvme_io": false 00:06:49.392 }, 00:06:49.392 "memory_domains": [ 00:06:49.392 { 00:06:49.392 "dma_device_id": "system", 00:06:49.392 "dma_device_type": 1 00:06:49.392 }, 00:06:49.392 { 00:06:49.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.392 "dma_device_type": 2 00:06:49.392 } 00:06:49.392 ], 00:06:49.392 "driver_specific": {} 00:06:49.392 } 00:06:49.392 ]' 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:49.392 16:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.793 16:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:50.793 16:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:50.793 16:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:50.793 16:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:50.793 16:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:53.330 16:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:53.951 16:51:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.890 ************************************ 00:06:54.890 START TEST filesystem_ext4 00:06:54.890 ************************************ 
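The filesystem_ext4 pass that follows drives the generic create/mount/write/unmount flow from target/filesystem.sh against the partition exported over NVMe/TCP. A minimal sketch of that flow, assuming the GPT partition and the target PID seen earlier in the trace (variable names here are illustrative, not the script's exact ones):

# smoke-test one filesystem on the NVMe-oF namespace (sketch; $dev and $nvmfpid assumed)
mkfs.ext4 -F "$dev"              # -F overwrites any stale signature on the partition
mount "$dev" /mnt/device
touch /mnt/device/aaa
sync                             # push the write out through the TCP transport
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"               # the nvmf_tgt process must still be alive after the I/O

The lsblk/grep checks that follow in the trace simply confirm the namespace and its partition are still visible to the initiator after the unmount.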
00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:54.890 16:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:54.890 mke2fs 1.46.5 (30-Dec-2021) 00:06:54.890 Discarding device blocks: 0/522240 done 00:06:54.890 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:54.890 Filesystem UUID: b84db6a4-0eab-4e4e-a1fb-ed5a0cbff7c3 00:06:54.890 Superblock backups stored on blocks: 00:06:54.890 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:54.890 00:06:54.890 Allocating group tables: 0/64 done 00:06:54.890 Writing inode tables: 0/64 done 00:06:58.188 Creating journal (8192 blocks): done 00:06:58.449 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:06:58.449 00:06:58.449 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:58.449 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1277969 00:06:58.710 16:51:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:58.710 00:06:58.710 real 0m4.000s 00:06:58.710 user 0m0.024s 00:06:58.710 sys 0m0.076s 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.710 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:58.710 ************************************ 00:06:58.710 END TEST filesystem_ext4 00:06:58.710 ************************************ 00:06:58.971 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:58.971 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:58.971 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.971 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.971 ************************************ 00:06:58.971 START TEST filesystem_btrfs 00:06:58.972 ************************************ 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:58.972 btrfs-progs v6.6.2 00:06:58.972 See https://btrfs.readthedocs.io for more information. 
00:06:58.972 00:06:58.972 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:58.972 NOTE: several default settings have changed in version 5.15, please make sure 00:06:58.972 this does not affect your deployments: 00:06:58.972 - DUP for metadata (-m dup) 00:06:58.972 - enabled no-holes (-O no-holes) 00:06:58.972 - enabled free-space-tree (-R free-space-tree) 00:06:58.972 00:06:58.972 Label: (null) 00:06:58.972 UUID: 6d8a9e71-1320-4ffb-af6d-6665c585e17d 00:06:58.972 Node size: 16384 00:06:58.972 Sector size: 4096 00:06:58.972 Filesystem size: 510.00MiB 00:06:58.972 Block group profiles: 00:06:58.972 Data: single 8.00MiB 00:06:58.972 Metadata: DUP 32.00MiB 00:06:58.972 System: DUP 8.00MiB 00:06:58.972 SSD detected: yes 00:06:58.972 Zoned device: no 00:06:58.972 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:58.972 Runtime features: free-space-tree 00:06:58.972 Checksum: crc32c 00:06:58.972 Number of devices: 1 00:06:58.972 Devices: 00:06:58.972 ID SIZE PATH 00:06:58.972 1 510.00MiB /dev/nvme0n1p1 00:06:58.972 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:58.972 16:51:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1277969 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.913 00:06:59.913 real 0m1.098s 00:06:59.913 user 0m0.035s 00:06:59.913 sys 0m0.122s 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:59.913 ************************************ 00:06:59.913 END TEST filesystem_btrfs 00:06:59.913 ************************************ 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test 
filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.913 ************************************ 00:06:59.913 START TEST filesystem_xfs 00:06:59.913 ************************************ 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:59.913 16:51:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:00.173 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:00.173 = sectsz=512 attr=2, projid32bit=1 00:07:00.173 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:00.173 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:00.173 data = bsize=4096 blocks=130560, imaxpct=25 00:07:00.173 = sunit=0 swidth=0 blks 00:07:00.173 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:00.173 log =internal log bsize=4096 blocks=16384, version=2 00:07:00.173 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:00.173 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:01.111 Discarding blocks...Done. 
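The xfs geometry dump above comes out of the same make_filesystem helper that the ext4 and btrfs passes used; the xtrace shows it doing little more than picking the right force flag before calling mkfs. A condensed sketch of that selection, assuming the helper's two arguments are the filesystem type and the device node (the real function in autotest_common.sh also keeps a retry counter, omitted here):

# condensed make_filesystem logic as seen in the trace (illustrative reconstruction)
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F                 # mke2fs takes -F to force creation
    else
        force=-f                 # mkfs.btrfs and mkfs.xfs take -f for the same purpose
    fi
    mkfs."$fstype" "$force" "$dev_name"
}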
00:07:01.111 16:51:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:01.111 16:51:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1277969 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.018 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.019 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.019 00:07:03.019 real 0m2.988s 00:07:03.019 user 0m0.028s 00:07:03.019 sys 0m0.075s 00:07:03.019 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.019 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:03.019 ************************************ 00:07:03.019 END TEST filesystem_xfs 00:07:03.019 ************************************ 00:07:03.019 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:03.280 16:51:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:03.540 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.801 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.801 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:03.802 
16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1277969 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1277969 ']' 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1277969 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1277969 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1277969' 00:07:03.802 killing process with pid 1277969 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1277969 00:07:03.802 [2024-05-15 16:51:42.567123] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:03.802 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1277969 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:04.071 00:07:04.071 real 0m15.806s 00:07:04.071 user 1m2.396s 00:07:04.071 sys 0m1.185s 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 ************************************ 00:07:04.071 END TEST nvmf_filesystem_no_in_capsule 00:07:04.071 ************************************ 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 ************************************ 00:07:04.071 START TEST nvmf_filesystem_in_capsule 00:07:04.071 ************************************ 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1281178 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1281178 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1281178 ']' 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.071 16:51:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 [2024-05-15 16:51:42.902582] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:07:04.071 [2024-05-15 16:51:42.902632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.332 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.332 [2024-05-15 16:51:42.967621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.332 [2024-05-15 16:51:43.031780] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.332 [2024-05-15 16:51:43.031833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
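The in-capsule pass restarts the target the same way the first pass did: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits on the RPC socket before any rpc_cmd calls are issued. Roughly, hedging on the harness internals and using the repo-relative binary path:

# how the trace above brings the target back up (sketch; waitforlisten is the harness helper)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid" /var/tmp/spdk.sock   # block until the target answers on the RPC socket

The only functional difference from the no-in-capsule run is the transport creation that follows, which passes -c 4096 instead of -c 0 to nvmf_create_transport.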
00:07:04.332 [2024-05-15 16:51:43.031841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.332 [2024-05-15 16:51:43.031847] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.332 [2024-05-15 16:51:43.031854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.332 [2024-05-15 16:51:43.031991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.332 [2024-05-15 16:51:43.032107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.332 [2024-05-15 16:51:43.032261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.332 [2024-05-15 16:51:43.032262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.903 [2024-05-15 16:51:43.721196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.903 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 Malloc1 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.164 16:51:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 [2024-05-15 16:51:43.843618] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:05.164 [2024-05-15 16:51:43.843868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:05.164 { 00:07:05.164 "name": "Malloc1", 00:07:05.164 "aliases": [ 00:07:05.164 "ac1e38ae-0481-4684-839c-ee5d39aa0733" 00:07:05.164 ], 00:07:05.164 "product_name": "Malloc disk", 00:07:05.164 "block_size": 512, 00:07:05.164 "num_blocks": 1048576, 00:07:05.164 "uuid": "ac1e38ae-0481-4684-839c-ee5d39aa0733", 00:07:05.164 "assigned_rate_limits": { 00:07:05.164 "rw_ios_per_sec": 0, 00:07:05.164 "rw_mbytes_per_sec": 0, 00:07:05.164 "r_mbytes_per_sec": 0, 00:07:05.164 "w_mbytes_per_sec": 0 00:07:05.164 }, 00:07:05.164 "claimed": true, 00:07:05.164 "claim_type": "exclusive_write", 00:07:05.164 "zoned": false, 00:07:05.164 "supported_io_types": { 00:07:05.164 "read": true, 00:07:05.164 "write": true, 00:07:05.164 "unmap": true, 00:07:05.164 "write_zeroes": true, 00:07:05.164 "flush": true, 00:07:05.164 "reset": true, 
00:07:05.164 "compare": false, 00:07:05.164 "compare_and_write": false, 00:07:05.164 "abort": true, 00:07:05.164 "nvme_admin": false, 00:07:05.164 "nvme_io": false 00:07:05.164 }, 00:07:05.164 "memory_domains": [ 00:07:05.164 { 00:07:05.164 "dma_device_id": "system", 00:07:05.164 "dma_device_type": 1 00:07:05.164 }, 00:07:05.164 { 00:07:05.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.164 "dma_device_type": 2 00:07:05.164 } 00:07:05.164 ], 00:07:05.164 "driver_specific": {} 00:07:05.164 } 00:07:05.164 ]' 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:05.164 16:51:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.076 16:51:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.077 16:51:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:07.077 16:51:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.077 16:51:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:07.077 16:51:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:08.985 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:09.246 16:51:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.186 ************************************ 00:07:10.186 START TEST filesystem_in_capsule_ext4 00:07:10.186 ************************************ 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:10.186 16:51:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:10.186 mke2fs 1.46.5 (30-Dec-2021) 00:07:10.445 Discarding device blocks: 0/522240 done 00:07:10.445 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:10.445 Filesystem UUID: dafef5e7-965f-47a3-9b07-5d7948a09591 00:07:10.445 Superblock backups stored on blocks: 00:07:10.445 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:10.445 00:07:10.445 Allocating group tables: 0/64 done 00:07:10.445 Writing inode tables: 0/64 done 00:07:10.445 Creating journal (8192 blocks): done 00:07:11.384 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:11.384 00:07:11.384 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:11.384 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1281178 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.645 00:07:11.645 real 0m1.403s 00:07:11.645 user 0m0.025s 00:07:11.645 sys 0m0.071s 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:11.645 ************************************ 00:07:11.645 END TEST filesystem_in_capsule_ext4 00:07:11.645 ************************************ 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.645 ************************************ 00:07:11.645 START TEST filesystem_in_capsule_btrfs 00:07:11.645 ************************************ 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:11.645 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:11.905 btrfs-progs v6.6.2 00:07:11.905 See https://btrfs.readthedocs.io for more information. 00:07:11.905 00:07:11.905 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:11.905 NOTE: several default settings have changed in version 5.15, please make sure 00:07:11.905 this does not affect your deployments: 00:07:11.905 - DUP for metadata (-m dup) 00:07:11.905 - enabled no-holes (-O no-holes) 00:07:11.905 - enabled free-space-tree (-R free-space-tree) 00:07:11.905 00:07:11.905 Label: (null) 00:07:11.905 UUID: 0bf7494b-9dfa-4b38-b2eb-618b6129c632 00:07:11.905 Node size: 16384 00:07:11.905 Sector size: 4096 00:07:11.905 Filesystem size: 510.00MiB 00:07:11.905 Block group profiles: 00:07:11.905 Data: single 8.00MiB 00:07:11.905 Metadata: DUP 32.00MiB 00:07:11.905 System: DUP 8.00MiB 00:07:11.905 SSD detected: yes 00:07:11.905 Zoned device: no 00:07:11.906 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:11.906 Runtime features: free-space-tree 00:07:11.906 Checksum: crc32c 00:07:11.906 Number of devices: 1 00:07:11.906 Devices: 00:07:11.906 ID SIZE PATH 00:07:11.906 1 510.00MiB /dev/nvme0n1p1 00:07:11.906 00:07:11.906 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:11.906 16:51:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1281178 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.286 00:07:13.286 real 0m1.346s 00:07:13.286 user 0m0.032s 00:07:13.286 sys 0m0.129s 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:13.286 ************************************ 00:07:13.286 END TEST filesystem_in_capsule_btrfs 00:07:13.286 ************************************ 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:13.286 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.287 ************************************ 00:07:13.287 START TEST filesystem_in_capsule_xfs 00:07:13.287 ************************************ 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:13.287 16:51:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:13.287 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:13.287 = sectsz=512 attr=2, projid32bit=1 00:07:13.287 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:13.287 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:13.287 data = bsize=4096 blocks=130560, imaxpct=25 00:07:13.287 = sunit=0 swidth=0 blks 00:07:13.287 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:13.287 log =internal log bsize=4096 blocks=16384, version=2 00:07:13.287 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:13.287 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:14.232 Discarding blocks...Done. 
00:07:14.232 16:51:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:14.232 16:51:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1281178 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.145 00:07:16.145 real 0m2.965s 00:07:16.145 user 0m0.028s 00:07:16.145 sys 0m0.074s 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.145 ************************************ 00:07:16.145 END TEST filesystem_in_capsule_xfs 00:07:16.145 ************************************ 00:07:16.145 16:51:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:16.404 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:16.404 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.404 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.404 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:16.404 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:16.404 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.664 16:51:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1281178 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1281178 ']' 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1281178 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1281178 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1281178' 00:07:16.664 killing process with pid 1281178 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1281178 00:07:16.664 [2024-05-15 16:51:55.330867] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:16.664 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1281178 00:07:16.924 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:16.924 00:07:16.924 real 0m12.716s 00:07:16.924 user 0m50.106s 00:07:16.924 sys 0m1.173s 00:07:16.924 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.925 ************************************ 00:07:16.925 END TEST nvmf_filesystem_in_capsule 00:07:16.925 ************************************ 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.925 rmmod nvme_tcp 00:07:16.925 rmmod nvme_fabrics 00:07:16.925 rmmod nvme_keyring 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.925 16:51:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.472 16:51:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.472 00:07:19.472 real 0m38.156s 00:07:19.472 user 1m54.750s 00:07:19.472 sys 0m7.682s 00:07:19.472 16:51:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.472 16:51:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 ************************************ 00:07:19.472 END TEST nvmf_filesystem 00:07:19.472 ************************************ 00:07:19.472 16:51:57 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:19.472 16:51:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:19.472 16:51:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.472 16:51:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 ************************************ 00:07:19.472 START TEST nvmf_target_discovery 00:07:19.472 ************************************ 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:19.472 * Looking for test storage... 
00:07:19.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.472 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.473 16:51:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.096 16:52:04 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.096 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:26.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:26.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:26.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:26.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.097 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.749 ms 00:07:26.357 00:07:26.357 --- 10.0.0.2 ping statistics --- 00:07:26.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.357 rtt min/avg/max/mdev = 0.749/0.749/0.749/0.000 ms 00:07:26.357 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:07:26.357 00:07:26.357 --- 10.0.0.1 ping statistics --- 00:07:26.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.357 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:07:26.357 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.357 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1288020 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1288020 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1288020 ']' 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.358 16:52:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.358 [2024-05-15 16:52:05.038415] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:07:26.358 [2024-05-15 16:52:05.038477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.358 [2024-05-15 16:52:05.109415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.358 [2024-05-15 16:52:05.184419] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.358 [2024-05-15 16:52:05.184472] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.358 [2024-05-15 16:52:05.184481] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.358 [2024-05-15 16:52:05.184487] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.358 [2024-05-15 16:52:05.184493] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.358 [2024-05-15 16:52:05.184581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.358 [2024-05-15 16:52:05.184803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.358 [2024-05-15 16:52:05.184804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.358 [2024-05-15 16:52:05.184657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 [2024-05-15 16:52:05.870136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:27.300 16:52:05 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 Null1 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 [2024-05-15 16:52:05.930276] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:27.300 [2024-05-15 16:52:05.930473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 Null2 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 Null3 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 Null4 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.300 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:07:27.561 00:07:27.561 Discovery Log Number of Records 6, Generation counter 6 00:07:27.561 =====Discovery Log Entry 0====== 00:07:27.561 trtype: tcp 00:07:27.561 adrfam: ipv4 00:07:27.561 subtype: current discovery subsystem 00:07:27.561 treq: not required 00:07:27.561 portid: 0 00:07:27.561 trsvcid: 4420 00:07:27.561 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:27.561 traddr: 10.0.0.2 00:07:27.561 eflags: explicit discovery connections, duplicate discovery information 00:07:27.561 sectype: none 00:07:27.561 =====Discovery Log Entry 1====== 00:07:27.561 trtype: tcp 00:07:27.561 adrfam: ipv4 00:07:27.561 subtype: nvme subsystem 00:07:27.561 treq: not required 00:07:27.561 portid: 0 00:07:27.561 trsvcid: 4420 00:07:27.561 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:27.561 traddr: 10.0.0.2 00:07:27.561 eflags: none 00:07:27.561 sectype: none 00:07:27.561 =====Discovery Log Entry 2====== 00:07:27.561 trtype: tcp 00:07:27.561 adrfam: ipv4 00:07:27.561 subtype: nvme subsystem 00:07:27.561 treq: not required 00:07:27.561 portid: 0 00:07:27.561 trsvcid: 4420 00:07:27.561 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:27.561 traddr: 10.0.0.2 00:07:27.561 eflags: none 00:07:27.561 sectype: none 00:07:27.561 =====Discovery Log Entry 3====== 00:07:27.561 trtype: tcp 00:07:27.561 adrfam: ipv4 00:07:27.561 subtype: nvme subsystem 00:07:27.561 treq: not required 00:07:27.561 portid: 0 00:07:27.561 trsvcid: 4420 00:07:27.561 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:27.561 traddr: 10.0.0.2 
00:07:27.561 eflags: none 00:07:27.561 sectype: none 00:07:27.561 =====Discovery Log Entry 4====== 00:07:27.561 trtype: tcp 00:07:27.561 adrfam: ipv4 00:07:27.561 subtype: nvme subsystem 00:07:27.561 treq: not required 00:07:27.561 portid: 0 00:07:27.561 trsvcid: 4420 00:07:27.561 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:27.561 traddr: 10.0.0.2 00:07:27.561 eflags: none 00:07:27.561 sectype: none 00:07:27.561 =====Discovery Log Entry 5====== 00:07:27.561 trtype: tcp 00:07:27.561 adrfam: ipv4 00:07:27.561 subtype: discovery subsystem referral 00:07:27.561 treq: not required 00:07:27.561 portid: 0 00:07:27.561 trsvcid: 4430 00:07:27.561 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:27.561 traddr: 10.0.0.2 00:07:27.561 eflags: none 00:07:27.561 sectype: none 00:07:27.561 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:27.561 Perform nvmf subsystem discovery via RPC 00:07:27.561 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:27.561 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.561 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.561 [ 00:07:27.561 { 00:07:27.561 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:27.561 "subtype": "Discovery", 00:07:27.561 "listen_addresses": [ 00:07:27.561 { 00:07:27.561 "trtype": "TCP", 00:07:27.561 "adrfam": "IPv4", 00:07:27.561 "traddr": "10.0.0.2", 00:07:27.561 "trsvcid": "4420" 00:07:27.561 } 00:07:27.561 ], 00:07:27.561 "allow_any_host": true, 00:07:27.561 "hosts": [] 00:07:27.561 }, 00:07:27.561 { 00:07:27.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.561 "subtype": "NVMe", 00:07:27.561 "listen_addresses": [ 00:07:27.561 { 00:07:27.561 "trtype": "TCP", 00:07:27.561 "adrfam": "IPv4", 00:07:27.561 "traddr": "10.0.0.2", 00:07:27.561 "trsvcid": "4420" 00:07:27.561 } 00:07:27.561 ], 00:07:27.561 "allow_any_host": true, 00:07:27.561 "hosts": [], 00:07:27.561 "serial_number": "SPDK00000000000001", 00:07:27.561 "model_number": "SPDK bdev Controller", 00:07:27.561 "max_namespaces": 32, 00:07:27.561 "min_cntlid": 1, 00:07:27.561 "max_cntlid": 65519, 00:07:27.561 "namespaces": [ 00:07:27.561 { 00:07:27.561 "nsid": 1, 00:07:27.561 "bdev_name": "Null1", 00:07:27.561 "name": "Null1", 00:07:27.561 "nguid": "D2E5AFCCCACF48DD824BAF83E2F1649F", 00:07:27.562 "uuid": "d2e5afcc-cacf-48dd-824b-af83e2f1649f" 00:07:27.562 } 00:07:27.562 ] 00:07:27.562 }, 00:07:27.562 { 00:07:27.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:27.562 "subtype": "NVMe", 00:07:27.562 "listen_addresses": [ 00:07:27.562 { 00:07:27.562 "trtype": "TCP", 00:07:27.562 "adrfam": "IPv4", 00:07:27.562 "traddr": "10.0.0.2", 00:07:27.562 "trsvcid": "4420" 00:07:27.562 } 00:07:27.562 ], 00:07:27.562 "allow_any_host": true, 00:07:27.562 "hosts": [], 00:07:27.562 "serial_number": "SPDK00000000000002", 00:07:27.562 "model_number": "SPDK bdev Controller", 00:07:27.562 "max_namespaces": 32, 00:07:27.562 "min_cntlid": 1, 00:07:27.562 "max_cntlid": 65519, 00:07:27.562 "namespaces": [ 00:07:27.562 { 00:07:27.562 "nsid": 1, 00:07:27.562 "bdev_name": "Null2", 00:07:27.562 "name": "Null2", 00:07:27.562 "nguid": "214F21DB91D8448AAE786640CB7B0BCE", 00:07:27.562 "uuid": "214f21db-91d8-448a-ae78-6640cb7b0bce" 00:07:27.562 } 00:07:27.562 ] 00:07:27.562 }, 00:07:27.562 { 00:07:27.562 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:27.562 "subtype": "NVMe", 00:07:27.562 "listen_addresses": [ 
00:07:27.562 { 00:07:27.562 "trtype": "TCP", 00:07:27.562 "adrfam": "IPv4", 00:07:27.562 "traddr": "10.0.0.2", 00:07:27.562 "trsvcid": "4420" 00:07:27.562 } 00:07:27.562 ], 00:07:27.562 "allow_any_host": true, 00:07:27.562 "hosts": [], 00:07:27.562 "serial_number": "SPDK00000000000003", 00:07:27.562 "model_number": "SPDK bdev Controller", 00:07:27.562 "max_namespaces": 32, 00:07:27.562 "min_cntlid": 1, 00:07:27.562 "max_cntlid": 65519, 00:07:27.562 "namespaces": [ 00:07:27.562 { 00:07:27.562 "nsid": 1, 00:07:27.562 "bdev_name": "Null3", 00:07:27.562 "name": "Null3", 00:07:27.562 "nguid": "21E8DF3B7A4140DA96B3D5B341FD537F", 00:07:27.562 "uuid": "21e8df3b-7a41-40da-96b3-d5b341fd537f" 00:07:27.562 } 00:07:27.562 ] 00:07:27.562 }, 00:07:27.562 { 00:07:27.562 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:27.562 "subtype": "NVMe", 00:07:27.562 "listen_addresses": [ 00:07:27.562 { 00:07:27.562 "trtype": "TCP", 00:07:27.562 "adrfam": "IPv4", 00:07:27.562 "traddr": "10.0.0.2", 00:07:27.562 "trsvcid": "4420" 00:07:27.562 } 00:07:27.562 ], 00:07:27.562 "allow_any_host": true, 00:07:27.562 "hosts": [], 00:07:27.562 "serial_number": "SPDK00000000000004", 00:07:27.562 "model_number": "SPDK bdev Controller", 00:07:27.562 "max_namespaces": 32, 00:07:27.562 "min_cntlid": 1, 00:07:27.562 "max_cntlid": 65519, 00:07:27.562 "namespaces": [ 00:07:27.562 { 00:07:27.562 "nsid": 1, 00:07:27.562 "bdev_name": "Null4", 00:07:27.562 "name": "Null4", 00:07:27.562 "nguid": "D9F57C02564A44338563DF193F4825B3", 00:07:27.562 "uuid": "d9f57c02-564a-4433-8563-df193f4825b3" 00:07:27.562 } 00:07:27.562 ] 00:07:27.562 } 00:07:27.562 ] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.562 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:27.823 
16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.823 rmmod nvme_tcp 00:07:27.823 rmmod nvme_fabrics 00:07:27.823 rmmod nvme_keyring 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1288020 ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1288020 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1288020 ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1288020 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1288020 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1288020' 00:07:27.823 killing process with pid 1288020 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1288020 00:07:27.823 [2024-05-15 16:52:06.524559] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1288020 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.823 16:52:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.366 16:52:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.366 00:07:30.366 real 0m10.925s 00:07:30.366 user 
0m8.112s 00:07:30.366 sys 0m5.515s 00:07:30.367 16:52:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.367 16:52:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.367 ************************************ 00:07:30.367 END TEST nvmf_target_discovery 00:07:30.367 ************************************ 00:07:30.367 16:52:08 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:30.367 16:52:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:30.367 16:52:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.367 16:52:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.367 ************************************ 00:07:30.367 START TEST nvmf_referrals 00:07:30.367 ************************************ 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:30.367 * Looking for test storage... 00:07:30.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.367 16:52:08 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:30.367 16:52:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.367 16:52:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.963 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.963 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.963 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.963 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.963 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.963 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:36.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:36.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:36.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:36.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
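The net_dev scan above has mapped the two ice ports to cvl_0_0 and cvl_0_1, and nvmf_tcp_init now splits them between a target-side network namespace and the root namespace: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2, while cvl_0_1 stays outside as the initiator interface on 10.0.0.1. A minimal stand-alone sketch of the plumbing these entries perform, assuming the same interface names and addresses as this run (other machines will enumerate different cvl_* devices):

# Recreate the target/initiator split used by these tests (run as root).
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"            # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # initiator -> target sanity check
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator sanity check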
00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.964 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:07:37.225 00:07:37.225 --- 10.0.0.2 ping statistics --- 00:07:37.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.225 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:07:37.225 00:07:37.225 --- 10.0.0.1 ping statistics --- 00:07:37.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.225 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1292359 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1292359 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1292359 ']' 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.225 16:52:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:37.225 [2024-05-15 16:52:15.981860] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:07:37.225 [2024-05-15 16:52:15.981920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.225 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.225 [2024-05-15 16:52:16.052933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.485 [2024-05-15 16:52:16.128079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.485 [2024-05-15 16:52:16.128118] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.485 [2024-05-15 16:52:16.128125] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.485 [2024-05-15 16:52:16.128132] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.485 [2024-05-15 16:52:16.128137] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.485 [2024-05-15 16:52:16.128275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.485 [2024-05-15 16:52:16.128391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.485 [2024-05-15 16:52:16.128555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.485 [2024-05-15 16:52:16.128569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.053 [2024-05-15 16:52:16.808109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.053 [2024-05-15 16:52:16.824100] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:38.053 [2024-05-15 16:52:16.824304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.053 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.054 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:38.314 16:52:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:38.314 16:52:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.314 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:38.573 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:38.832 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:39.091 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
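This stretch of referrals.sh keeps cross-checking two views of the same referral list: the target's own RPC view via nvmf_discovery_get_referrals, and the initiator's view via the discovery log returned by nvme discover, with jq pulling out the traddr of every record that is not the current discovery subsystem. The rpc_cmd wrapper used here drives SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, so a rough hand-run equivalent of one add/check/remove round trip, using the same addresses and NQN as this run, would be:

# Add a referral pointing at 127.0.0.2:4430 for cnode1, then compare both views.
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'          # target-side list
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'  # initiator view
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The --hostnqn/--hostid arguments seen in the log can be added to the nvme discover call unchanged; they only identify the discovering host.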
00:07:39.092 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:39.351 16:52:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:39.351 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:39.351 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:39.351 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:39.351 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:39.351 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:39.351 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:39.611 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:39.611 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.612 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.612 rmmod nvme_tcp 00:07:39.872 rmmod nvme_fabrics 00:07:39.872 rmmod nvme_keyring 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1292359 ']' 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1292359 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1292359 ']' 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1292359 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1292359 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1292359' 00:07:39.872 killing process with pid 1292359 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1292359 00:07:39.872 [2024-05-15 16:52:18.553710] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1292359 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
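nvmftestfini here repeats the teardown pattern already seen at the end of nvmf_target_discovery: flush I/O, unload the NVMe/TCP host modules, kill the nvmf_tgt process, then undo the namespace and address setup. The log only shows the _remove_spdk_ns helper being invoked, not its body, so the namespace deletion in the sketch below is an assumption; the rest mirrors the commands visible in these entries.

# Approximate teardown; _remove_spdk_ns's exact contents are not shown in this log.
nvmfpid=1292359                        # pid recorded when this nvmf_tgt was started
sync
modprobe -v -r nvme-tcp                # verbose output is the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # wait works because nvmf_tgt is a job of the test shell
ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1               # matches the flush that follows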
00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.872 16:52:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.416 16:52:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:42.416 00:07:42.416 real 0m11.981s 00:07:42.416 user 0m13.128s 00:07:42.416 sys 0m5.880s 00:07:42.416 16:52:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.416 16:52:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.416 ************************************ 00:07:42.416 END TEST nvmf_referrals 00:07:42.416 ************************************ 00:07:42.416 16:52:20 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:42.416 16:52:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:42.416 16:52:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.416 16:52:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.416 ************************************ 00:07:42.416 START TEST nvmf_connect_disconnect 00:07:42.416 ************************************ 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:42.416 * Looking for test storage... 00:07:42.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.416 16:52:20 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
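Alongside the PATH exports above, the per-test source of test/nvmf/common.sh derives the initiator identity from nvme-cli instead of hard-coding it; the host ID is the UUID suffix of the generated NQN. A minimal sketch, assuming the extraction shown here (the resulting values match the ones in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:00d0226a-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumed: strip up to the last ':' to get the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn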
00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.416 16:52:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:49.001 
16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.001 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:49.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:49.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:49.002 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:49.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.002 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:07:49.263 00:07:49.263 --- 10.0.0.2 ping statistics --- 00:07:49.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.263 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:07:49.263 00:07:49.263 --- 10.0.0.1 ping statistics --- 00:07:49.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.263 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:49.263 16:52:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1297091 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1297091 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1297091 ']' 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.263 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.264 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.264 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:49.264 [2024-05-15 16:52:28.057174] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
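The device discovery and network setup traced above (and repeated by every nvmf test in this run) reduces to a small sequence: resolve the two E810 ports 0000:4b:00.0/.1 (device 0x159b) to their kernel interfaces through sysfs, keep cvl_0_1 with 10.0.0.1 in the root namespace as the initiator side, move cvl_0_0 with 10.0.0.2 into a private namespace, and start nvmf_tgt inside it. Condensed from the commands in the log (the sysfs lookup is written as ls for readability; common.sh uses an equivalent glob):

    ls /sys/bus/pci/devices/0000:4b:00.0/net/     # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:4b:00.1/net/     # -> cvl_0_1

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # nvmfappstart then launches the target inside the namespace (path shortened here)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &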
00:07:49.264 [2024-05-15 16:52:28.057225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.264 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.523 [2024-05-15 16:52:28.123865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.523 [2024-05-15 16:52:28.191918] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.523 [2024-05-15 16:52:28.191950] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.523 [2024-05-15 16:52:28.191958] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.523 [2024-05-15 16:52:28.191964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.523 [2024-05-15 16:52:28.191970] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.523 [2024-05-15 16:52:28.192108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.523 [2024-05-15 16:52:28.192222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.523 [2024-05-15 16:52:28.192375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.523 [2024-05-15 16:52:28.192377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:50.095 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.096 [2024-05-15 16:52:28.883109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.096 16:52:28 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.096 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.357 [2024-05-15 16:52:28.942280] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:50.357 [2024-05-15 16:52:28.942504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:50.357 16:52:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:54.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.693 rmmod nvme_tcp 00:08:08.693 rmmod nvme_fabrics 00:08:08.693 rmmod nvme_keyring 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:08.693 16:52:47 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1297091 ']' 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1297091 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1297091 ']' 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1297091 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297091 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297091' 00:08:08.693 killing process with pid 1297091 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1297091 00:08:08.693 [2024-05-15 16:52:47.247105] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1297091 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.693 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.694 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.694 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.694 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.694 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.694 16:52:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.235 16:52:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:11.235 00:08:11.235 real 0m28.662s 00:08:11.235 user 1m18.567s 00:08:11.235 sys 0m6.385s 00:08:11.235 16:52:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.235 16:52:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:11.235 ************************************ 00:08:11.235 END TEST nvmf_connect_disconnect 00:08:11.235 ************************************ 00:08:11.235 16:52:49 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:11.235 16:52:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:11.235 16:52:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.235 16:52:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.235 ************************************ 00:08:11.235 START TEST nvmf_multitarget 
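The connect_disconnect run that just ended (real 0m28.662s in the timing block above) drives the target purely over JSON-RPC and then loops a host connect/disconnect cycle num_iterations=5 times. The target-side calls are condensed from the rpc_cmd lines above (rpc_cmd wraps scripts/rpc.py); the host-side flags are an assumption built from NVME_CONNECT and NVME_HOST, since the log only records the resulting "disconnected 1 controller(s)" lines:

    # target side
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                              # -> Malloc0 (64 MiB, 512 B blocks)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side, per iteration (sketch with assumed flags)
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # "disconnected 1 controller(s)"
    done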
00:08:11.235 ************************************ 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:11.235 * Looking for test storage... 00:08:11.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.235 16:52:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
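The same preamble repeats here for nvmf_multitarget because every script under test/nvmf/target sources common.sh and calls nvmftestinit, which rebuilds the topology from scratch and arms the matching teardown. An outline inferred from the common.sh line numbers in the xtrace (the exact nesting inside prepare_net_devs is an assumption):

    nvmftestinit() {
        trap nvmftestfini SIGINT SIGTERM EXIT    # guarantees module unload / pid kill / ns cleanup
        prepare_net_devs                         # remove_spdk_ns, then for NET_TYPE=phy:
                                                 #   gather_supported_nvmf_pci_devs  (find the e810 ports)
                                                 #   nvmf_tcp_init                   (netns + 10.0.0.x plumbing)
    }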
00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.236 16:52:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.820 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.820 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:08:17.821 00:08:17.821 --- 10.0.0.2 ping statistics --- 00:08:17.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.821 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:08:17.821 00:08:17.821 --- 10.0.0.1 ping statistics --- 00:08:17.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.821 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1305135 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1305135 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1305135 ']' 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.821 16:52:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 [2024-05-15 16:52:56.652296] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
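nvmfappstart -m 0xF, seen here for the multitarget instance, is the same launch-and-wait helper each test uses: it records the target pid and blocks until the RPC socket is ready. A condensed sketch from the records around this point:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                     # 1305135 for this instance
    waitforlisten "$nvmfpid"       # waits for /var/tmp/spdk.sock to accept RPCs
    # -m 0xF    -> core mask, hence the four reactors on cores 0-3 in the startup notices
    # -e 0xFFFF -> tracepoint group mask reported by app_setup_trace
    # -i 0      -> shared-memory id, matching NVMF_APP_SHM_ID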
00:08:17.821 [2024-05-15 16:52:56.652374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.082 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.082 [2024-05-15 16:52:56.723862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.082 [2024-05-15 16:52:56.798027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.082 [2024-05-15 16:52:56.798069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.082 [2024-05-15 16:52:56.798077] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.082 [2024-05-15 16:52:56.798083] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.082 [2024-05-15 16:52:56.798089] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.082 [2024-05-15 16:52:56.798226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.082 [2024-05-15 16:52:56.798341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.082 [2024-05-15 16:52:56.798497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.082 [2024-05-15 16:52:56.798497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:18.654 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:18.915 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:18.915 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:18.915 "nvmf_tgt_1" 00:08:18.915 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:18.915 "nvmf_tgt_2" 00:08:19.175 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:19.175 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:19.175 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:19.175 
16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:19.175 true 00:08:19.176 16:52:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:19.437 true 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.437 rmmod nvme_tcp 00:08:19.437 rmmod nvme_fabrics 00:08:19.437 rmmod nvme_keyring 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1305135 ']' 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1305135 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1305135 ']' 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1305135 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:19.437 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1305135 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1305135' 00:08:19.698 killing process with pid 1305135 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1305135 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1305135 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.698 16:52:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.246 16:53:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.246 00:08:22.246 real 0m10.979s 00:08:22.246 user 0m9.153s 00:08:22.246 sys 0m5.599s 00:08:22.246 16:53:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.246 16:53:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:22.246 ************************************ 00:08:22.246 END TEST nvmf_multitarget 00:08:22.246 ************************************ 00:08:22.246 16:53:00 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:22.246 16:53:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:22.246 16:53:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.246 16:53:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.246 ************************************ 00:08:22.246 START TEST nvmf_rpc 00:08:22.246 ************************************ 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:22.246 * Looking for test storage... 00:08:22.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.246 16:53:00 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.246 
16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.246 16:53:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:28.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:28.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:28.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:28.839 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.840 
16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:28.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.840 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.100 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.100 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.100 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.100 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:08:29.101 00:08:29.101 --- 10.0.0.2 ping statistics --- 00:08:29.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.101 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:08:29.101 00:08:29.101 --- 10.0.0.1 ping statistics --- 00:08:29.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.101 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1309473 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1309473 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1309473 ']' 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:29.101 16:53:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.362 [2024-05-15 16:53:07.948124] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
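The nvmf_tcp_init and nvmfappstart steps traced above turn the two cvl_0_* ports into a point-to-point TCP test bed: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened, and nvmf_tgt is launched inside the namespace. A condensed sketch of the same wiring, with the interface names, addresses, and flags taken from this run (a sketch of this harness's setup, not general configuration advice):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used in this run
    NS=cvl_0_0_ns_spdk                                       # target-side network namespace
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1                           # E810 ports discovered above

    ip netns add $NS
    ip link set $TGT_IF netns $NS                 # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev $INI_IF           # initiator address, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
    ip link set $INI_IF up
    ip netns exec $NS ip link set $TGT_IF up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec $NS ping -c 1 10.0.0.1

    # Run nvmf_tgt in the namespace: shm id 0, all tracepoint groups, cores 0-3.
    ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
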
00:08:29.363 [2024-05-15 16:53:07.948189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.363 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.363 [2024-05-15 16:53:08.018721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.363 [2024-05-15 16:53:08.093584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.363 [2024-05-15 16:53:08.093625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.363 [2024-05-15 16:53:08.093633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.363 [2024-05-15 16:53:08.093639] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.363 [2024-05-15 16:53:08.093645] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.363 [2024-05-15 16:53:08.093788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.363 [2024-05-15 16:53:08.093906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.363 [2024-05-15 16:53:08.094063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.363 [2024-05-15 16:53:08.094064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.935 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:30.196 "tick_rate": 2400000000, 00:08:30.196 "poll_groups": [ 00:08:30.196 { 00:08:30.196 "name": "nvmf_tgt_poll_group_000", 00:08:30.196 "admin_qpairs": 0, 00:08:30.196 "io_qpairs": 0, 00:08:30.196 "current_admin_qpairs": 0, 00:08:30.196 "current_io_qpairs": 0, 00:08:30.196 "pending_bdev_io": 0, 00:08:30.196 "completed_nvme_io": 0, 00:08:30.196 "transports": [] 00:08:30.196 }, 00:08:30.196 { 00:08:30.196 "name": "nvmf_tgt_poll_group_001", 00:08:30.196 "admin_qpairs": 0, 00:08:30.196 "io_qpairs": 0, 00:08:30.196 "current_admin_qpairs": 0, 00:08:30.196 "current_io_qpairs": 0, 00:08:30.196 "pending_bdev_io": 0, 00:08:30.196 "completed_nvme_io": 0, 00:08:30.196 "transports": [] 00:08:30.196 }, 00:08:30.196 { 00:08:30.196 "name": "nvmf_tgt_poll_group_002", 00:08:30.196 "admin_qpairs": 0, 00:08:30.196 "io_qpairs": 0, 00:08:30.196 "current_admin_qpairs": 0, 00:08:30.196 "current_io_qpairs": 0, 00:08:30.196 "pending_bdev_io": 0, 00:08:30.196 "completed_nvme_io": 0, 00:08:30.196 "transports": [] 
00:08:30.196 }, 00:08:30.196 { 00:08:30.196 "name": "nvmf_tgt_poll_group_003", 00:08:30.196 "admin_qpairs": 0, 00:08:30.196 "io_qpairs": 0, 00:08:30.196 "current_admin_qpairs": 0, 00:08:30.196 "current_io_qpairs": 0, 00:08:30.196 "pending_bdev_io": 0, 00:08:30.196 "completed_nvme_io": 0, 00:08:30.196 "transports": [] 00:08:30.196 } 00:08:30.196 ] 00:08:30.196 }' 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 [2024-05-15 16:53:08.888445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.196 16:53:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:30.197 "tick_rate": 2400000000, 00:08:30.197 "poll_groups": [ 00:08:30.197 { 00:08:30.197 "name": "nvmf_tgt_poll_group_000", 00:08:30.197 "admin_qpairs": 0, 00:08:30.197 "io_qpairs": 0, 00:08:30.197 "current_admin_qpairs": 0, 00:08:30.197 "current_io_qpairs": 0, 00:08:30.197 "pending_bdev_io": 0, 00:08:30.197 "completed_nvme_io": 0, 00:08:30.197 "transports": [ 00:08:30.197 { 00:08:30.197 "trtype": "TCP" 00:08:30.197 } 00:08:30.197 ] 00:08:30.197 }, 00:08:30.197 { 00:08:30.197 "name": "nvmf_tgt_poll_group_001", 00:08:30.197 "admin_qpairs": 0, 00:08:30.197 "io_qpairs": 0, 00:08:30.197 "current_admin_qpairs": 0, 00:08:30.197 "current_io_qpairs": 0, 00:08:30.197 "pending_bdev_io": 0, 00:08:30.197 "completed_nvme_io": 0, 00:08:30.197 "transports": [ 00:08:30.197 { 00:08:30.197 "trtype": "TCP" 00:08:30.197 } 00:08:30.197 ] 00:08:30.197 }, 00:08:30.197 { 00:08:30.197 "name": "nvmf_tgt_poll_group_002", 00:08:30.197 "admin_qpairs": 0, 00:08:30.197 "io_qpairs": 0, 00:08:30.197 "current_admin_qpairs": 0, 00:08:30.197 "current_io_qpairs": 0, 00:08:30.197 "pending_bdev_io": 0, 00:08:30.197 "completed_nvme_io": 0, 00:08:30.197 "transports": [ 00:08:30.197 { 00:08:30.197 "trtype": "TCP" 00:08:30.197 } 00:08:30.197 ] 00:08:30.197 }, 00:08:30.197 { 00:08:30.197 "name": "nvmf_tgt_poll_group_003", 00:08:30.197 "admin_qpairs": 0, 00:08:30.197 "io_qpairs": 0, 00:08:30.197 "current_admin_qpairs": 0, 00:08:30.197 "current_io_qpairs": 0, 00:08:30.197 "pending_bdev_io": 0, 00:08:30.197 "completed_nvme_io": 0, 00:08:30.197 "transports": [ 00:08:30.197 { 00:08:30.197 "trtype": "TCP" 00:08:30.197 } 00:08:30.197 ] 00:08:30.197 } 00:08:30.197 ] 
00:08:30.197 }' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:30.197 16:53:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.197 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 Malloc1 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 [2024-05-15 16:53:09.076026] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:30.459 [2024-05-15 16:53:09.076234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.459 16:53:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:30.459 [2024-05-15 16:53:09.103123] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:08:30.459 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:30.459 could not add new controller: failed to write to nvme-fabrics device 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.459 16:53:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:31.843 16:53:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
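The steps above create the first subsystem (a malloc-backed namespace behind nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.2:4420) and exercise per-host access control: with allow-any-host disabled the initiator's connect is rejected, and it succeeds only after its host NQN is added; the matching disconnect and host-removal checks follow below. A condensed sketch of that allow/deny round trip, reusing the NQNs, serial, and address from this run (rpc_cmd is the harness RPC helper seen in the trace; the expected failure, which the harness wraps in its NOT helper, is shown here as a plain negation):

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Build the subsystem: malloc bdev, namespace, TCP listener, allow-any-host off.
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem $SUBNQN -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns $SUBNQN Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d $SUBNQN
    rpc_cmd nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420

    # Before the host NQN is whitelisted the connect must be refused ...
    ! nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420

    # ... and must succeed once the host has been added to the subsystem.
    rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
    nvme disconnect -n $SUBNQN
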
00:08:31.843 16:53:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:31.843 16:53:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.843 16:53:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:31.843 16:53:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.387 [2024-05-15 16:53:12.817229] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:08:34.387 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:34.387 could not add new controller: failed to write to nvme-fabrics device 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.387 16:53:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.774 16:53:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.774 16:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:35.774 16:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.774 16:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:35.774 16:53:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:37.691 16:53:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.952 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:37.952 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.952 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:37.952 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.953 [2024-05-15 16:53:16.574853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.953 16:53:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.336 16:53:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.336 16:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:39.336 16:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.336 16:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:39.336 16:53:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:41.249 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:41.249 
16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:41.249 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:41.509 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.510 [2024-05-15 16:53:20.319001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.510 16:53:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.770 16:53:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.207 16:53:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:43.207 16:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:43.207 16:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.207 16:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:43.207 16:53:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:45.122 16:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.384 16:53:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.384 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:45.384 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:45.384 16:53:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.384 16:53:24 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 [2024-05-15 16:53:24.064886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.384 16:53:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.301 16:53:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.301 16:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:47.301 16:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.301 16:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:47.301 16:53:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 [2024-05-15 16:53:27.823832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.217 16:53:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:50.603 16:53:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:08:50.603 16:53:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:50.603 16:53:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.603 16:53:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:50.603 16:53:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:52.518 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:52.518 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:52.518 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:52.518 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:52.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 
[2024-05-15 16:53:31.531003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.779 16:53:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.700 16:53:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.700 16:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:54.700 16:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.700 16:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:54.700 16:53:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 [2024-05-15 16:53:35.293661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.617 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 [2024-05-15 16:53:35.353816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 [2024-05-15 16:53:35.413976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.618 
16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.618 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 [2024-05-15 16:53:35.470142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 [2024-05-15 16:53:35.534343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.880 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
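For readers following the trace, each iteration of the connect/disconnect loop above (target/rpc.sh@81-94) boils down to one create/attach/connect/teardown cycle. A minimal sketch of that cycle, assuming the workspace's rpc.py client and nvme-cli are available as shown in this log (illustration only, not the rpc.sh helper functions themselves):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # client path as used elsewhere in this log
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME              # serial number the test greps for
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420     # TCP listener on the target address used in this run
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                         # expose bdev Malloc1 as namespace 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420                      # initiator side; the trace also passes --hostnqn/--hostid for this host
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done   # simplified stand-in for the waitforserial helper
    nvme disconnect -n "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"

The later iterations traced above (target/rpc.sh@99-107) skip the host connect/disconnect steps and exercise only the subsystem and namespace RPCs before the stats check that follows.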
00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:56.881 "tick_rate": 2400000000, 00:08:56.881 "poll_groups": [ 00:08:56.881 { 00:08:56.881 "name": "nvmf_tgt_poll_group_000", 00:08:56.881 "admin_qpairs": 0, 00:08:56.881 "io_qpairs": 224, 00:08:56.881 "current_admin_qpairs": 0, 00:08:56.881 "current_io_qpairs": 0, 00:08:56.881 "pending_bdev_io": 0, 00:08:56.881 "completed_nvme_io": 422, 00:08:56.881 "transports": [ 00:08:56.881 { 00:08:56.881 "trtype": "TCP" 00:08:56.881 } 00:08:56.881 ] 00:08:56.881 }, 00:08:56.881 { 00:08:56.881 "name": "nvmf_tgt_poll_group_001", 00:08:56.881 "admin_qpairs": 1, 00:08:56.881 "io_qpairs": 223, 00:08:56.881 "current_admin_qpairs": 0, 00:08:56.881 "current_io_qpairs": 0, 00:08:56.881 "pending_bdev_io": 0, 00:08:56.881 "completed_nvme_io": 223, 00:08:56.881 "transports": [ 00:08:56.881 { 00:08:56.881 "trtype": "TCP" 00:08:56.881 } 00:08:56.881 ] 00:08:56.881 }, 00:08:56.881 { 00:08:56.881 "name": "nvmf_tgt_poll_group_002", 00:08:56.881 "admin_qpairs": 6, 00:08:56.881 "io_qpairs": 218, 00:08:56.881 "current_admin_qpairs": 0, 00:08:56.881 "current_io_qpairs": 0, 00:08:56.881 "pending_bdev_io": 0, 00:08:56.881 "completed_nvme_io": 316, 00:08:56.881 "transports": [ 00:08:56.881 { 00:08:56.881 "trtype": "TCP" 00:08:56.881 } 00:08:56.881 ] 00:08:56.881 }, 00:08:56.881 { 00:08:56.881 "name": "nvmf_tgt_poll_group_003", 00:08:56.881 "admin_qpairs": 0, 00:08:56.881 "io_qpairs": 224, 00:08:56.881 "current_admin_qpairs": 0, 00:08:56.881 "current_io_qpairs": 0, 00:08:56.881 "pending_bdev_io": 0, 00:08:56.881 "completed_nvme_io": 278, 00:08:56.881 "transports": [ 00:08:56.881 { 00:08:56.881 "trtype": "TCP" 00:08:56.881 } 00:08:56.881 ] 00:08:56.881 } 00:08:56.881 ] 00:08:56.881 }' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.881 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.881 rmmod nvme_tcp 00:08:57.143 rmmod nvme_fabrics 00:08:57.143 rmmod nvme_keyring 00:08:57.143 
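The jsum checks earlier in this block (target/rpc.sh@112-113) sum one numeric field across the four poll groups reported by nvmf_get_stats. The same aggregation can be reproduced directly with jq piped into awk, assuming the same rpc.py client as above (a sketch; the totals naturally vary per run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Sum io_qpairs over all poll groups, mirroring what jsum does with jq + awk.
    $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
    # Against the stats captured above this prints 889 (224 + 223 + 218 + 224), matching the (( 889 > 0 )) check.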
16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1309473 ']' 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1309473 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1309473 ']' 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1309473 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1309473 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1309473' 00:08:57.143 killing process with pid 1309473 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1309473 00:08:57.143 [2024-05-15 16:53:35.832833] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1309473 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.143 16:53:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.701 16:53:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:59.701 00:08:59.701 real 0m37.509s 00:08:59.701 user 1m53.432s 00:08:59.701 sys 0m7.249s 00:08:59.701 16:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:59.701 16:53:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.701 ************************************ 00:08:59.701 END TEST nvmf_rpc 00:08:59.701 ************************************ 00:08:59.701 16:53:38 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:59.701 16:53:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:59.701 16:53:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:59.701 16:53:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:59.701 ************************************ 00:08:59.701 START TEST nvmf_invalid 00:08:59.701 ************************************ 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:59.701 * Looking for test storage... 00:08:59.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.701 16:53:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:06.291 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:06.291 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.291 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:06.292 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:06.292 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:09:06.292 00:09:06.292 --- 10.0.0.2 ping statistics --- 00:09:06.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.292 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:09:06.292 00:09:06.292 --- 10.0.0.1 ping statistics --- 00:09:06.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.292 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1319224 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1319224 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1319224 ']' 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:06.292 16:53:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:06.292 [2024-05-15 16:53:44.939303] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:09:06.292 [2024-05-15 16:53:44.939353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.292 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.292 [2024-05-15 16:53:45.005085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.292 [2024-05-15 16:53:45.071098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.292 [2024-05-15 16:53:45.071136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.292 [2024-05-15 16:53:45.071144] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.292 [2024-05-15 16:53:45.071150] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.292 [2024-05-15 16:53:45.071155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.292 [2024-05-15 16:53:45.071293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.292 [2024-05-15 16:53:45.071407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.292 [2024-05-15 16:53:45.071583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.292 [2024-05-15 16:53:45.071584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24584 00:09:07.235 [2024-05-15 16:53:45.893493] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:07.235 { 00:09:07.235 "nqn": "nqn.2016-06.io.spdk:cnode24584", 00:09:07.235 "tgt_name": "foobar", 00:09:07.235 "method": "nvmf_create_subsystem", 00:09:07.235 "req_id": 1 00:09:07.235 } 00:09:07.235 Got JSON-RPC error response 00:09:07.235 response: 00:09:07.235 { 00:09:07.235 "code": -32603, 00:09:07.235 "message": "Unable to find target foobar" 00:09:07.235 }' 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:07.235 { 00:09:07.235 "nqn": "nqn.2016-06.io.spdk:cnode24584", 00:09:07.235 "tgt_name": "foobar", 00:09:07.235 "method": "nvmf_create_subsystem", 00:09:07.235 "req_id": 1 00:09:07.235 } 00:09:07.235 Got JSON-RPC error response 00:09:07.235 response: 00:09:07.235 { 00:09:07.235 "code": -32603, 00:09:07.235 "message": "Unable to find target foobar" 00:09:07.235 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:07.235 16:53:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6501 00:09:07.497 [2024-05-15 16:53:46.070100] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6501: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:07.497 { 00:09:07.497 "nqn": "nqn.2016-06.io.spdk:cnode6501", 00:09:07.497 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:07.497 "method": "nvmf_create_subsystem", 00:09:07.497 "req_id": 1 00:09:07.497 } 00:09:07.497 Got JSON-RPC error response 00:09:07.497 response: 00:09:07.497 { 00:09:07.497 "code": -32602, 00:09:07.497 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:07.497 }' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:07.497 { 00:09:07.497 "nqn": "nqn.2016-06.io.spdk:cnode6501", 00:09:07.497 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:07.497 "method": "nvmf_create_subsystem", 00:09:07.497 "req_id": 1 00:09:07.497 } 00:09:07.497 Got JSON-RPC error response 00:09:07.497 response: 00:09:07.497 { 00:09:07.497 "code": -32602, 00:09:07.497 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:07.497 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17006 00:09:07.497 [2024-05-15 16:53:46.250719] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17006: invalid model number 'SPDK_Controller' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:07.497 { 00:09:07.497 "nqn": "nqn.2016-06.io.spdk:cnode17006", 00:09:07.497 "model_number": "SPDK_Controller\u001f", 00:09:07.497 "method": "nvmf_create_subsystem", 00:09:07.497 "req_id": 1 00:09:07.497 } 00:09:07.497 Got JSON-RPC error response 00:09:07.497 response: 00:09:07.497 { 00:09:07.497 "code": -32602, 00:09:07.497 "message": "Invalid MN SPDK_Controller\u001f" 00:09:07.497 }' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:07.497 { 00:09:07.497 "nqn": "nqn.2016-06.io.spdk:cnode17006", 00:09:07.497 "model_number": "SPDK_Controller\u001f", 00:09:07.497 "method": "nvmf_create_subsystem", 00:09:07.497 "req_id": 1 00:09:07.497 } 00:09:07.497 Got JSON-RPC error response 00:09:07.497 response: 00:09:07.497 { 00:09:07.497 "code": -32602, 00:09:07.497 "message": "Invalid MN SPDK_Controller\u001f" 00:09:07.497 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.497 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.498 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
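The two -32602 failures traced just above (target/invalid.sh@45-@51) come from appending the non-printable byte 0x1f to an otherwise valid serial number and model number before calling nvmf_create_subsystem; the script then only checks that the error text mentions "Invalid SN" / "Invalid MN". A condensed, hedged sketch of that pattern (rpc.py path taken from the trace; the long character-building run that continues below belongs to the next test case):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Serial number with a trailing 0x1f byte -> expect "Invalid SN ..."
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
      nqn.2016-06.io.spdk:cnode6501 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# Model number with a trailing 0x1f byte -> expect "Invalid MN ..."
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' \
      nqn.2016-06.io.spdk:cnode17006 2>&1) || true
[[ $out == *"Invalid MN"* ]]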
00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:07.758 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
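The long printf %x / echo -e / string+= run surrounding this point is the trace of invalid.sh's gen_random_s helper building a random string one character at a time from the ASCII codes 32-127 listed in the chars array above. The random index selection itself is not visible in the xtrace, so the sketch below fills it in with an assumed RANDOM-based pick; only the per-character conversion mirrors the trace:

gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                      # same code range as the array in the trace
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}   # assumed: pick a random code
        local hex
        hex=$(printf %x "$code")                     # e.g. 107 -> 6b
        string+=$(echo -e "\x$hex")                  # append that character
    done
    echo "$string"
}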
00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'k0g\'\''!QXh&TkOkTgo>L]' 00:09:07.759 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'k0g\'\''!QXh&TkOkTgo>L]' nqn.2016-06.io.spdk:cnode15400 00:09:07.759 [2024-05-15 16:53:46.579717] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15400: invalid serial number 'k0g\'!QXh&TkOkTgo>L]' 00:09:08.020 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:08.020 { 00:09:08.020 "nqn": "nqn.2016-06.io.spdk:cnode15400", 00:09:08.020 "serial_number": "k0\u007fg\\'\''!QXh&TkOkTgo>L]", 00:09:08.020 "method": "nvmf_create_subsystem", 
00:09:08.020 "req_id": 1 00:09:08.020 } 00:09:08.020 Got JSON-RPC error response 00:09:08.020 response: 00:09:08.020 { 00:09:08.020 "code": -32602, 00:09:08.020 "message": "Invalid SN k0\u007fg\\'\''!QXh&TkOkTgo>L]" 00:09:08.020 }' 00:09:08.020 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:08.020 { 00:09:08.020 "nqn": "nqn.2016-06.io.spdk:cnode15400", 00:09:08.020 "serial_number": "k0\u007fg\\'!QXh&TkOkTgo>L]", 00:09:08.020 "method": "nvmf_create_subsystem", 00:09:08.020 "req_id": 1 00:09:08.020 } 00:09:08.020 Got JSON-RPC error response 00:09:08.020 response: 00:09:08.020 { 00:09:08.020 "code": -32602, 00:09:08.020 "message": "Invalid SN k0\u007fg\\'!QXh&TkOkTgo>L]" 00:09:08.020 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:08.020 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:08.020 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:08.021 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 
00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.022 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 
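The 21- and 41-character strings being assembled in this trace are deliberately one character longer than the 20-byte serial-number and 40-byte model-number ASCII fields the NVMe spec allots (and may also contain non-printable bytes such as 0x7f), so the nvmf_create_subsystem calls that follow are expected to be rejected. A minimal sketch of those checks, reusing the $rpc path and the gen_random_s sketch above:

sn=$(gen_random_s 21)    # one character over the 20-character serial-number field
mn=$(gen_random_s 41)    # one character over the 40-character model-number field

out=$($rpc nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode15400 2>&1) || true
[[ $out == *"Invalid SN"* ]]

out=$($rpc nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode23977 2>&1) || true
[[ $out == *"Invalid MN"* ]]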
00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'T%@?q,'\''"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l' 00:09:08.284 16:53:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'T%@?q,'\''"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l' nqn.2016-06.io.spdk:cnode23977 00:09:08.284 [2024-05-15 16:53:47.061277] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23977: invalid model number 'T%@?q,'"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l' 00:09:08.284 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:08.284 { 00:09:08.284 "nqn": "nqn.2016-06.io.spdk:cnode23977", 00:09:08.284 "model_number": "T%@?q,'\''\"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l", 00:09:08.284 "method": "nvmf_create_subsystem", 00:09:08.284 "req_id": 1 00:09:08.284 } 00:09:08.284 Got JSON-RPC error response 00:09:08.284 response: 00:09:08.284 { 00:09:08.284 "code": -32602, 00:09:08.284 "message": "Invalid MN T%@?q,'\''\"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l" 00:09:08.284 }' 00:09:08.284 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:08.284 { 00:09:08.284 "nqn": "nqn.2016-06.io.spdk:cnode23977", 00:09:08.284 "model_number": "T%@?q,'\"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l", 00:09:08.284 "method": "nvmf_create_subsystem", 00:09:08.284 "req_id": 1 00:09:08.284 } 00:09:08.284 Got JSON-RPC error response 00:09:08.284 response: 
00:09:08.284 { 00:09:08.284 "code": -32602, 00:09:08.284 "message": "Invalid MN T%@?q,'\"KBmfaRoX A q[t9>lZ;=eAR2HW3Qac+~l" 00:09:08.284 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:08.284 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:08.545 [2024-05-15 16:53:47.233907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.545 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:08.806 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:08.806 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:08.806 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:08.806 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:08.807 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:08.807 [2024-05-15 16:53:47.586973] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:08.807 [2024-05-15 16:53:47.587035] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:08.807 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:08.807 { 00:09:08.807 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:08.807 "listen_address": { 00:09:08.807 "trtype": "tcp", 00:09:08.807 "traddr": "", 00:09:08.807 "trsvcid": "4421" 00:09:08.807 }, 00:09:08.807 "method": "nvmf_subsystem_remove_listener", 00:09:08.807 "req_id": 1 00:09:08.807 } 00:09:08.807 Got JSON-RPC error response 00:09:08.807 response: 00:09:08.807 { 00:09:08.807 "code": -32602, 00:09:08.807 "message": "Invalid parameters" 00:09:08.807 }' 00:09:08.807 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:08.807 { 00:09:08.807 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:08.807 "listen_address": { 00:09:08.807 "trtype": "tcp", 00:09:08.807 "traddr": "", 00:09:08.807 "trsvcid": "4421" 00:09:08.807 }, 00:09:08.807 "method": "nvmf_subsystem_remove_listener", 00:09:08.807 "req_id": 1 00:09:08.807 } 00:09:08.807 Got JSON-RPC error response 00:09:08.807 response: 00:09:08.807 { 00:09:08.807 "code": -32602, 00:09:08.807 "message": "Invalid parameters" 00:09:08.807 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:08.807 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12249 -i 0 00:09:09.067 [2024-05-15 16:53:47.763524] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12249: invalid cntlid range [0-65519] 00:09:09.067 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:09.067 { 00:09:09.068 "nqn": "nqn.2016-06.io.spdk:cnode12249", 00:09:09.068 "min_cntlid": 0, 00:09:09.068 "method": "nvmf_create_subsystem", 00:09:09.068 "req_id": 1 00:09:09.068 } 00:09:09.068 Got JSON-RPC error response 00:09:09.068 response: 00:09:09.068 { 00:09:09.068 "code": -32602, 00:09:09.068 "message": "Invalid cntlid range [0-65519]" 00:09:09.068 }' 00:09:09.068 16:53:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:09.068 { 00:09:09.068 "nqn": "nqn.2016-06.io.spdk:cnode12249", 00:09:09.068 "min_cntlid": 0, 00:09:09.068 "method": "nvmf_create_subsystem", 00:09:09.068 "req_id": 1 00:09:09.068 } 00:09:09.068 Got JSON-RPC error response 00:09:09.068 response: 00:09:09.068 { 00:09:09.068 "code": -32602, 00:09:09.068 "message": "Invalid cntlid range [0-65519]" 00:09:09.068 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.068 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24720 -i 65520 00:09:09.328 [2024-05-15 16:53:47.940106] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24720: invalid cntlid range [65520-65519] 00:09:09.328 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:09.328 { 00:09:09.328 "nqn": "nqn.2016-06.io.spdk:cnode24720", 00:09:09.328 "min_cntlid": 65520, 00:09:09.328 "method": "nvmf_create_subsystem", 00:09:09.328 "req_id": 1 00:09:09.328 } 00:09:09.328 Got JSON-RPC error response 00:09:09.328 response: 00:09:09.328 { 00:09:09.328 "code": -32602, 00:09:09.328 "message": "Invalid cntlid range [65520-65519]" 00:09:09.328 }' 00:09:09.328 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:09.328 { 00:09:09.328 "nqn": "nqn.2016-06.io.spdk:cnode24720", 00:09:09.328 "min_cntlid": 65520, 00:09:09.328 "method": "nvmf_create_subsystem", 00:09:09.328 "req_id": 1 00:09:09.328 } 00:09:09.328 Got JSON-RPC error response 00:09:09.328 response: 00:09:09.328 { 00:09:09.328 "code": -32602, 00:09:09.328 "message": "Invalid cntlid range [65520-65519]" 00:09:09.328 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.328 16:53:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12233 -I 0 00:09:09.328 [2024-05-15 16:53:48.108643] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12233: invalid cntlid range [1-0] 00:09:09.328 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:09.328 { 00:09:09.328 "nqn": "nqn.2016-06.io.spdk:cnode12233", 00:09:09.328 "max_cntlid": 0, 00:09:09.328 "method": "nvmf_create_subsystem", 00:09:09.328 "req_id": 1 00:09:09.328 } 00:09:09.328 Got JSON-RPC error response 00:09:09.328 response: 00:09:09.328 { 00:09:09.328 "code": -32602, 00:09:09.328 "message": "Invalid cntlid range [1-0]" 00:09:09.328 }' 00:09:09.328 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:09.328 { 00:09:09.329 "nqn": "nqn.2016-06.io.spdk:cnode12233", 00:09:09.329 "max_cntlid": 0, 00:09:09.329 "method": "nvmf_create_subsystem", 00:09:09.329 "req_id": 1 00:09:09.329 } 00:09:09.329 Got JSON-RPC error response 00:09:09.329 response: 00:09:09.329 { 00:09:09.329 "code": -32602, 00:09:09.329 "message": "Invalid cntlid range [1-0]" 00:09:09.329 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.329 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1166 -I 65520 00:09:09.590 [2024-05-15 16:53:48.285218] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1166: invalid cntlid range [1-65520] 00:09:09.590 16:53:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@79 -- # out='request: 00:09:09.590 { 00:09:09.590 "nqn": "nqn.2016-06.io.spdk:cnode1166", 00:09:09.590 "max_cntlid": 65520, 00:09:09.590 "method": "nvmf_create_subsystem", 00:09:09.590 "req_id": 1 00:09:09.590 } 00:09:09.590 Got JSON-RPC error response 00:09:09.590 response: 00:09:09.590 { 00:09:09.590 "code": -32602, 00:09:09.590 "message": "Invalid cntlid range [1-65520]" 00:09:09.590 }' 00:09:09.590 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:09.590 { 00:09:09.590 "nqn": "nqn.2016-06.io.spdk:cnode1166", 00:09:09.590 "max_cntlid": 65520, 00:09:09.590 "method": "nvmf_create_subsystem", 00:09:09.590 "req_id": 1 00:09:09.590 } 00:09:09.590 Got JSON-RPC error response 00:09:09.590 response: 00:09:09.590 { 00:09:09.590 "code": -32602, 00:09:09.590 "message": "Invalid cntlid range [1-65520]" 00:09:09.590 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.590 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29064 -i 6 -I 5 00:09:09.851 [2024-05-15 16:53:48.453774] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29064: invalid cntlid range [6-5] 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:09.851 { 00:09:09.851 "nqn": "nqn.2016-06.io.spdk:cnode29064", 00:09:09.851 "min_cntlid": 6, 00:09:09.851 "max_cntlid": 5, 00:09:09.851 "method": "nvmf_create_subsystem", 00:09:09.851 "req_id": 1 00:09:09.851 } 00:09:09.851 Got JSON-RPC error response 00:09:09.851 response: 00:09:09.851 { 00:09:09.851 "code": -32602, 00:09:09.851 "message": "Invalid cntlid range [6-5]" 00:09:09.851 }' 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:09.851 { 00:09:09.851 "nqn": "nqn.2016-06.io.spdk:cnode29064", 00:09:09.851 "min_cntlid": 6, 00:09:09.851 "max_cntlid": 5, 00:09:09.851 "method": "nvmf_create_subsystem", 00:09:09.851 "req_id": 1 00:09:09.851 } 00:09:09.851 Got JSON-RPC error response 00:09:09.851 response: 00:09:09.851 { 00:09:09.851 "code": -32602, 00:09:09.851 "message": "Invalid cntlid range [6-5]" 00:09:09.851 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:09.851 { 00:09:09.851 "name": "foobar", 00:09:09.851 "method": "nvmf_delete_target", 00:09:09.851 "req_id": 1 00:09:09.851 } 00:09:09.851 Got JSON-RPC error response 00:09:09.851 response: 00:09:09.851 { 00:09:09.851 "code": -32602, 00:09:09.851 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:09.851 }' 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:09.851 { 00:09:09.851 "name": "foobar", 00:09:09.851 "method": "nvmf_delete_target", 00:09:09.851 "req_id": 1 00:09:09.851 } 00:09:09.851 Got JSON-RPC error response 00:09:09.851 response: 00:09:09.851 { 00:09:09.851 "code": -32602, 00:09:09.851 "message": "The specified target doesn't exist, cannot delete it." 
00:09:09.851 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:09.851 rmmod nvme_tcp 00:09:09.851 rmmod nvme_fabrics 00:09:09.851 rmmod nvme_keyring 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1319224 ']' 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1319224 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 1319224 ']' 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 1319224 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:09.851 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1319224 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1319224' 00:09:10.113 killing process with pid 1319224 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 1319224 00:09:10.113 [2024-05-15 16:53:48.725862] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 1319224 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.113 16:53:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.659 16:53:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
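The block of -32602 errors traced at target/invalid.sh@73-@84 above exercises the controller-ID bounds of nvmf_create_subsystem: cntlid values must fall within 1-65519 and min_cntlid must not exceed max_cntlid, so each of the following calls (copied from the trace, $rpc as in the earlier sketch) fails with "Invalid cntlid range":

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12249 -i 0        # [0-65519]
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24720 -i 65520    # [65520-65519]
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12233 -I 0        # [1-0]
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1166  -I 65520    # [1-65520]
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29064 -i 6 -I 5   # [6-5]

The final negative check, nvmf_delete_target --name foobar via multitarget_rpc.py, likewise confirms that deleting a non-existent target reports an error rather than succeeding silently, after which nvmftestfini unloads the nvme-tcp modules and tears the test namespace down.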
00:09:12.659 00:09:12.659 real 0m12.837s 00:09:12.659 user 0m19.011s 00:09:12.659 sys 0m5.990s 00:09:12.659 16:53:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.659 16:53:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:12.659 ************************************ 00:09:12.659 END TEST nvmf_invalid 00:09:12.659 ************************************ 00:09:12.659 16:53:50 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:12.659 16:53:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:12.659 16:53:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.659 16:53:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.659 ************************************ 00:09:12.659 START TEST nvmf_abort 00:09:12.659 ************************************ 00:09:12.659 16:53:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:12.659 * Looking for test storage... 00:09:12.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
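With nvmf_invalid finished, the trace above switches to test/nvmf/target/abort.sh. Its preamble, condensed from the @9-@15 entries (the $rootdir variable name is an assumption for the spdk tree; nvmfappstart is invoked further down once the NICs are configured):

source "$rootdir/test/nvmf/common.sh"   # pulls in nvmftestinit, nvmfappstart, NVMF_* defaults
MALLOC_BDEV_SIZE=64                     # malloc bdev size (MB) used later as the namespace
MALLOC_BLOCK_SIZE=4096
nvmftestinit                            # detect the e810 ports, build the netns, assign 10.0.0.x
nvmfappstart -m 0xE                     # start nvmf_tgt on cores 1-3 (mask 0xE)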
00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.659 16:53:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:19.248 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.249 16:53:57 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:19.249 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:19.249 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:19.249 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:19.249 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:19.249 16:53:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.249 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.249 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.249 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:19.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
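The nvmf_tcp_init sequence traced above moves one port of the E810 pair (cvl_0_0) into a private network namespace so the target and the initiator reach each other over real NICs at 10.0.0.2 and 10.0.0.1; the ping replies and statistics follow below. A minimal standalone sketch of the same wiring, assuming the cvl_0_0/cvl_0_1 interface names seen on this host and root privileges:

# Put the target-side port in its own namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# The initiator-side port stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links up and open TCP/4420 for NVMe/TCP traffic.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1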
00:09:19.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:09:19.249 00:09:19.249 --- 10.0.0.2 ping statistics --- 00:09:19.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.249 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:09:19.249 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:09:19.510 00:09:19.510 --- 10.0.0.1 ping statistics --- 00:09:19.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.510 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:19.510 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1324245 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1324245 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1324245 ']' 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:19.511 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:19.511 [2024-05-15 16:53:58.186194] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:09:19.511 [2024-05-15 16:53:58.186259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.511 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.511 [2024-05-15 16:53:58.275689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.833 [2024-05-15 16:53:58.370409] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.833 [2024-05-15 16:53:58.370466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.833 [2024-05-15 16:53:58.370479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.833 [2024-05-15 16:53:58.370486] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.833 [2024-05-15 16:53:58.370492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.833 [2024-05-15 16:53:58.370621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.833 [2024-05-15 16:53:58.370793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.833 [2024-05-15 16:53:58.370887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.160 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:20.160 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:09:20.160 16:53:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.160 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.160 16:53:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.421 [2024-05-15 16:53:59.018742] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.421 Malloc0 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.421 Delay0 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.421 16:53:59 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.421 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.422 [2024-05-15 16:53:59.100768] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:20.422 [2024-05-15 16:53:59.100993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.422 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.422 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.422 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.422 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.422 16:53:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.422 16:53:59 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:20.422 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.422 [2024-05-15 16:53:59.210701] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:22.966 Initializing NVMe Controllers 00:09:22.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:22.966 controller IO queue size 128 less than required 00:09:22.966 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:22.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:22.966 Initialization complete. Launching workers. 
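With the namespace in place, target/abort.sh (traced above) starts nvmf_tgt on core mask 0xE inside that namespace, layers a Delay0 bdev on top of Malloc0 so queued I/O lives long enough to be aborted, exposes it as nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and then drives build/examples/abort against it; the run's I/O and abort counters follow below. A condensed sketch of the same RPC sequence, with $SPDK_DIR standing in for the Jenkins workspace path used in this job and some target flags trimmed for brevity:

# Start the target inside the namespace (backgrounded here for brevity).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -m 0xE &

rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
"$rpc" bdev_malloc_create 64 4096 -b Malloc0
# Delay0 adds a large artificial latency on top of Malloc0, giving aborts something to hit.
"$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Drive a 128-deep workload briefly from core 0 while issuing aborts against it.
"$SPDK_DIR/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128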
00:09:22.966 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34742 00:09:22.966 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34803, failed to submit 62 00:09:22.966 success 34746, unsuccess 57, failed 0 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.966 rmmod nvme_tcp 00:09:22.966 rmmod nvme_fabrics 00:09:22.966 rmmod nvme_keyring 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:22.966 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1324245 ']' 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1324245 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1324245 ']' 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1324245 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1324245 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1324245' 00:09:22.967 killing process with pid 1324245 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1324245 00:09:22.967 [2024-05-15 16:54:01.382459] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1324245 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.967 
16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.967 16:54:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.879 16:54:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.879 00:09:24.879 real 0m12.612s 00:09:24.879 user 0m13.257s 00:09:24.879 sys 0m5.997s 00:09:24.879 16:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.879 16:54:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.879 ************************************ 00:09:24.879 END TEST nvmf_abort 00:09:24.879 ************************************ 00:09:24.879 16:54:03 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:24.879 16:54:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:24.879 16:54:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.879 16:54:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.879 ************************************ 00:09:24.879 START TEST nvmf_ns_hotplug_stress 00:09:24.879 ************************************ 00:09:24.879 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.140 * Looking for test storage... 00:09:25.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.140 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.140 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:25.140 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.140 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.140 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.140 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.141 
16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.141 
16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.141 16:54:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.283 16:54:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:33.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:33.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.283 
16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.283 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:33.283 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:33.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.284 
16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:09:33.284 00:09:33.284 --- 10.0.0.2 ping statistics --- 00:09:33.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.284 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:09:33.284 00:09:33.284 --- 10.0.0.1 ping statistics --- 00:09:33.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.284 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.284 16:54:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1329576 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1329576 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1329576 ']' 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.284 [2024-05-15 16:54:11.069656] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:09:33.284 [2024-05-15 16:54:11.069722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.284 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.284 [2024-05-15 16:54:11.158464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.284 [2024-05-15 16:54:11.251193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:33.284 [2024-05-15 16:54:11.251246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.284 [2024-05-15 16:54:11.251254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.284 [2024-05-15 16:54:11.251261] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.284 [2024-05-15 16:54:11.251267] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.284 [2024-05-15 16:54:11.251402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.284 [2024-05-15 16:54:11.251590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.284 [2024-05-15 16:54:11.251645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:33.284 16:54:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.284 [2024-05-15 16:54:12.037139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.284 16:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.545 16:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.804 [2024-05-15 16:54:12.382421] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:33.804 [2024-05-15 16:54:12.382662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.804 16:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.804 16:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:34.064 Malloc0 00:09:34.064 16:54:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:34.064 Delay0 00:09:34.324 16:54:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.324 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:34.584 NULL1 00:09:34.584 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:34.584 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1329993 00:09:34.584 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:34.584 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:34.584 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.584 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.845 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.106 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:35.106 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:35.106 [2024-05-15 16:54:13.855240] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:09:35.106 true 00:09:35.106 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:35.106 16:54:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.368 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.628 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:35.628 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:35.628 true 00:09:35.628 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:35.628 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.889 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.150 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:36.150 16:54:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:36.150 true 00:09:36.150 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:36.150 16:54:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.411 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.673 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:36.673 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:36.673 true 00:09:36.673 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:36.673 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.936 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.936 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:36.936 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:37.197 true 00:09:37.197 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:37.197 16:54:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.457 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.457 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:37.457 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:37.717 true 00:09:37.717 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:37.717 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.978 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.978 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:37.978 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 
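Each iteration of the hotplug stress loop traced above first checks with kill -0 that the spdk_nvme_perf workload (PID 1329993 here) is still alive, then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev, and grows the NULL1 bdev by one block per pass (1001, 1002, ... via bdev_null_resize); the resize result and the remaining iterations continue below. A paraphrased sketch of that loop, assuming rpc.py is on PATH and $PERF_PID holds the perf process id (the real script's loop construct may differ):

# One namespace-hotplug + resize cycle per pass, for as long as perf is running.
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"
done

As the test name suggests, the intent is to keep I/O flowing from spdk_nvme_perf while namespaces appear, disappear, and change size underneath it.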
00:09:38.238 true 00:09:38.238 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:38.238 16:54:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.499 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.499 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:38.499 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:38.761 true 00:09:38.761 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:38.761 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.022 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.022 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:39.022 16:54:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:39.282 true 00:09:39.282 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:39.282 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.543 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.543 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:39.543 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:39.804 true 00:09:39.804 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:39.804 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.065 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.065 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:40.065 16:54:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:40.326 true 00:09:40.326 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:40.326 16:54:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.586 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.586 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:40.586 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:40.846 true 00:09:40.846 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:40.846 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.107 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.107 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:41.107 16:54:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:41.367 true 00:09:41.367 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:41.367 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.626 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.626 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:41.626 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:41.886 true 00:09:41.886 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:41.886 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.146 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.146 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:42.146 16:54:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:42.407 true 00:09:42.407 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:42.407 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:42.668 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.668 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:42.668 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:42.930 true 00:09:42.930 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:42.930 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.192 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.192 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:43.192 16:54:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:43.452 true 00:09:43.452 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:43.452 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.713 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.713 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:43.713 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:43.974 true 00:09:43.974 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:43.974 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.234 16:54:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.234 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:44.234 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:44.495 true 00:09:44.495 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:44.495 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.755 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.755 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:44.755 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:45.015 true 00:09:45.015 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:45.015 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.275 16:54:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.275 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:45.275 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:45.535 true 00:09:45.535 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:45.535 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.795 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.795 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:45.795 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:46.055 true 00:09:46.055 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:46.055 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.315 16:54:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.315 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:46.315 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:46.575 true 00:09:46.575 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:46.575 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.836 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.836 16:54:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:46.836 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:47.095 true 00:09:47.095 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:47.095 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.095 16:54:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.355 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:47.355 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:47.615 true 00:09:47.615 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:47.615 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.615 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.875 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:47.875 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:48.136 true 00:09:48.136 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:48.136 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.136 16:54:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.397 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:48.397 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:48.658 true 00:09:48.658 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:48.658 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.658 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.918 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:48.918 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:49.179 true 00:09:49.179 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:49.179 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.179 16:54:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.439 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:49.439 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:49.704 true 00:09:49.704 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:49.704 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.704 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.018 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:50.018 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:50.018 true 00:09:50.301 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:50.301 16:54:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.301 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.562 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:50.562 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:50.562 true 00:09:50.562 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:50.562 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.822 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.082 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:51.082 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:51.082 true 00:09:51.082 
16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:51.082 16:54:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.342 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.603 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:51.603 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:51.603 true 00:09:51.603 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:51.603 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.863 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.123 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:52.123 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:52.123 true 00:09:52.123 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:52.123 16:54:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.384 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.644 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:52.644 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:52.644 true 00:09:52.644 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:52.644 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.905 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.164 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:53.164 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:53.164 true 00:09:53.164 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:53.164 16:54:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.424 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.685 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:53.685 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:53.685 true 00:09:53.685 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:53.685 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.946 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.206 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:54.206 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:54.206 true 00:09:54.206 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:54.206 16:54:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.466 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.736 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:54.736 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:54.736 true 00:09:54.736 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:54.736 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.996 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.996 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:54.996 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:55.257 true 00:09:55.257 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:55.257 16:54:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:55.524 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.524 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:55.524 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:55.785 true 00:09:55.785 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:55.785 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.045 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.045 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:56.045 16:54:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:56.305 true 00:09:56.305 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:56.305 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.565 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.565 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:56.565 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:56.825 true 00:09:56.825 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:56.825 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.085 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.085 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:09:57.085 16:54:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:57.346 true 00:09:57.346 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:57.346 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.606 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.606 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:09:57.606 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:57.868 true 00:09:57.868 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:57.868 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.129 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.129 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:09:58.129 16:54:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:58.390 true 00:09:58.390 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:58.390 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.650 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.650 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:09:58.650 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:58.911 true 00:09:58.911 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:58.911 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.171 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.171 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:09:59.171 16:54:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:59.432 true 00:09:59.432 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:59.432 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.692 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.692 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 
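The trace above is one round of the namespace hot-plug loop. Going by the script line numbers printed in the xtrace (ns_hotplug_stress.sh lines 44-50), each round appears equivalent to the sketch below. This is a reconstruction from the log only: the loop shape, the rpc_py/nqn/perf_pid variable names, and the way null_size is incremented are assumptions, while the RPC names, their arguments, and PID 1329993 are taken verbatim from the trace.

    # Reconstruction of the hot-plug loop as suggested by the xtrace (not the literal script).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1329993                                   # background I/O generator started earlier in the test
    null_size=1000
    while kill -0 "$perf_pid"; do                      # line 44: keep going while the generator is alive
        $rpc_py nvmf_subsystem_remove_ns "$nqn" 1      # line 45: hot-remove namespace 1 under I/O
        $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0    # line 46: hot-add it back, backed by the Delay0 bdev
        null_size=$((null_size + 1))                   # line 49: next size (1008, 1009, ... in this run)
        $rpc_py bdev_null_resize NULL1 "$null_size"    # line 50: resize the NULL1 bdev while it is in use
    done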
00:09:59.692 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:59.952 true 00:09:59.952 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:09:59.952 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.211 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.211 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:00.211 16:54:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:00.471 true 00:10:00.471 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:00.471 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.729 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.730 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:00.730 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:00.990 true 00:10:00.990 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:00.990 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.250 16:54:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.250 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:01.250 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:01.510 true 00:10:01.510 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:01.510 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.770 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.770 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:01.770 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1053 00:10:02.031 true 00:10:02.031 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:02.031 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.290 16:54:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.290 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:02.290 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:02.550 true 00:10:02.550 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:02.550 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.810 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.810 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:02.810 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:03.069 true 00:10:03.069 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:03.069 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.329 16:54:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.329 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:03.329 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:03.589 true 00:10:03.589 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 00:10:03.589 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.849 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.849 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:03.849 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:04.109 true 00:10:04.109 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993 
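The kill -0 1329993 probes only test whether the background I/O generator is still running; they send no signal. Once that process exits, the probe fails with the "No such process" message seen a little further down, the loop sketched earlier stops, and the script reaps the generator and detaches both namespaces before the multi-threaded phase (trace lines 53-55). A minimal sketch of that teardown, reusing rpc_py, nqn, and perf_pid from the previous sketch:

    wait "$perf_pid"                              # line 53: collect the finished generator's exit status
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1     # line 54
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 2     # line 55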
00:10:04.109 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:04.369 16:54:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:04.369 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058
00:10:04.369 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058
00:10:04.629 true
00:10:04.629 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993
00:10:04.629 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:04.888 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:04.888 Initializing NVMe Controllers
00:10:04.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:04.888 Controller IO queue size 128, less than required.
00:10:04.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:04.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:04.889 Initialization complete. Launching workers.
00:10:04.889 ========================================================
00:10:04.889                                                            Latency(us)
00:10:04.889 Device Information                                       :     IOPS    MiB/s   Average       min       max
00:10:04.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31508.80    15.39   4062.21   1667.73  43484.61
00:10:04.889 ========================================================
00:10:04.889 Total                                                    : 31508.80    15.39   4062.21   1667.73  43484.61
00:10:04.889
00:10:04.889 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059
00:10:04.889 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059
00:10:05.149 true
00:10:05.149 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1329993
00:10:05.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1329993) - No such process
00:10:05.149 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1329993
00:10:05.149 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:05.409 16:54:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:05.409 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:05.409 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:05.409 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:05.409 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.409 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:05.669 null0 00:10:05.669 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.669 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.669 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:05.669 null1 00:10:05.669 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.669 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.669 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:05.929 null2 00:10:05.929 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.929 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.929 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:06.190 null3 00:10:06.190 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.190 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.190 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:06.190 null4 00:10:06.190 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.190 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.190 16:54:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:06.450 null5 00:10:06.450 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.450 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.450 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:06.711 null6 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:06.711 null7 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
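As a back-of-the-envelope check on the latency summary printed above (this arithmetic is not part of the log, and it assumes the throughput column is MiB/s and that the generator ran at the reported queue size of 128):

    15.39 MiB/s / 31508.80 IOPS = 15.39 * 1048576 / 31508.80 ≈ 512 bytes per I/O
    128 in-flight commands / 4062.21 us average latency ≈ 31,510 IOPS

So the generator appears to have been issuing 512-byte I/Os while keeping roughly 128 commands in flight, which by Little's law lands almost exactly on the measured 31,508.80 IOPS; the 43,484.61 us maximum against a 1,667.73 us minimum plausibly reflects I/Os caught in the middle of a namespace remove/add cycle.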
00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
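The second phase of the test, whose xtrace runs above and below this point, stresses the subsystem with eight concurrent add/remove workers. Reconstructed from script lines 58-66 as they appear in the trace, the setup looks roughly like the sketch below; the loop syntax is an assumption, while the RPC names, the null0..null7 bdev names, the 100/4096 arguments, and the final wait on the worker PIDs are all taken from the log (rpc_py as defined in the earlier sketch).

    nthreads=8                                          # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                # lines 59-60: one backing null bdev per worker
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                # lines 62-64: one background worker per namespace
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                   # line 66: "wait 1336401 1336403 ..." in this run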
00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
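Each worker runs the add_remove helper whose trace is interleaved around this point (script lines 14-18). From the values it logs, the helper appears equivalent to the following sketch; the function body is reconstructed from the trace rather than copied from the script, and it reuses rpc_py and nqn from the earlier sketches.

    add_remove() {
        local nsid=$1 bdev=$2                                        # line 14: e.g. "add_remove 1 null0"
        for ((i = 0; i < 10; i++)); do                               # line 16: ten add/remove rounds
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # line 17: attach bdev as namespace $nsid
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"          # line 18: detach it again
        done
    }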
00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1336401 1336403 1336405 1336407 1336410 1336412 1336415 1336417 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.711 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.972 16:54:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.972 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.233 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.234 16:54:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.234 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.234 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.495 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.756 16:54:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.756 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.016 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.278 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.540 16:54:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.540 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.801 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.063 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.323 16:54:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.323 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.584 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.845 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.106 16:54:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.106 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.367 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.367 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.367 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.367 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.628 rmmod nvme_tcp 00:10:10.628 rmmod nvme_fabrics 00:10:10.628 rmmod nvme_keyring 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1329576 ']' 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1329576 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1329576 ']' 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1329576 00:10:10.628 
16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1329576 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1329576' 00:10:10.628 killing process with pid 1329576 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1329576 00:10:10.628 [2024-05-15 16:54:49.331981] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1329576 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.628 16:54:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.188 16:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.188 00:10:13.188 real 0m47.886s 00:10:13.188 user 3m13.020s 00:10:13.188 sys 0m17.959s 00:10:13.188 16:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.188 16:54:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.188 ************************************ 00:10:13.188 END TEST nvmf_ns_hotplug_stress 00:10:13.188 ************************************ 00:10:13.188 16:54:51 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:13.188 16:54:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:13.188 16:54:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.188 16:54:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.188 ************************************ 00:10:13.188 START TEST nvmf_connect_stress 00:10:13.188 ************************************ 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:13.188 * Looking for test storage... 
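The ns_hotplug_stress trace above is a tight loop that hot-adds and hot-removes namespaces on subsystem nqn.2016-06.io.spdk:cnode1 through the SPDK RPC interface while initiators stay connected. A minimal stand-alone sketch of that pattern is shown below; it is not the actual ns_hotplug_stress.sh, it assumes a running nvmf_tgt with the subsystem and null bdevs null0..null7 already created (as in the trace), it serialises the add/remove calls that the real run races against each other, and it reuses the rpc.py path and RPC arguments exactly as they appear in the log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1

  for ((i = 0; i < 10; i++)); do
      # Hot-add namespaces 1..8 in shuffled order; nsid N is backed by bdev
      # null(N-1), matching the nvmf_subsystem_add_ns calls in the trace.
      for nsid in $(seq 1 8 | shuf); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "null$((nsid - 1))"
      done
      # Hot-remove them again; connected initiators must survive the churn.
      for nsid in $(seq 1 8 | shuf); do
          "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
      done
  done

The repeated (( ++i )) and (( i < 10 )) entries in the trace are simply this loop counter being evaluated with xtrace enabled.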
00:10:13.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.188 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.189 16:54:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:19.857 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:19.857 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:19.857 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.857 16:54:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:19.857 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:10:19.857 00:10:19.857 --- 10.0.0.2 ping statistics --- 00:10:19.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.857 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:10:19.857 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:10:19.857 00:10:19.858 --- 10.0.0.1 ping statistics --- 00:10:19.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.858 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.858 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1341509 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1341509 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1341509 ']' 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:20.119 16:54:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.119 [2024-05-15 16:54:58.772753] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
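The block above is the harness's nvmf_tcp_init step: one of the two detected e810 ports (cvl_0_0) is moved into a private network namespace so target and initiator can exercise a real NIC-to-NIC NVMe/TCP path on a single host. A minimal sketch of that setup, using the interface names, namespace name, and 10.0.0.0/24 addressing taken directly from the trace:

    # clear any stale addressing, then put the target-side port in its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the root namespace on 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every subsequent target-side command in the log is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD array), so the listener ends up bound inside the namespace.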
00:10:20.119 [2024-05-15 16:54:58.772817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.119 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.119 [2024-05-15 16:54:58.859273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.119 [2024-05-15 16:54:58.952476] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.119 [2024-05-15 16:54:58.952531] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.119 [2024-05-15 16:54:58.952540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.119 [2024-05-15 16:54:58.952556] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.119 [2024-05-15 16:54:58.952563] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.119 [2024-05-15 16:54:58.952688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.119 [2024-05-15 16:54:58.952995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.119 [2024-05-15 16:54:58.952996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 [2024-05-15 16:54:59.586176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 [2024-05-15 16:54:59.610324] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:21.061 [2024-05-15 16:54:59.610514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.061 NULL1 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1341738 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.061 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.062 16:54:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.323 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.323 16:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:21.323 16:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.323 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.323 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.584 16:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:21.584 16:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.584 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.584 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.156 16:55:00 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.156 16:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:22.156 16:55:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.157 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.157 16:55:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.417 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.417 16:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:22.417 16:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.417 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.417 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.678 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.678 16:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:22.678 16:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.678 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.678 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.940 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.940 16:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:22.940 16:55:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.940 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.940 16:55:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.201 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.201 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:23.201 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.201 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.201 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.771 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.771 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:23.771 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.771 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.771 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.031 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:24.031 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.031 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.031 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:24.292 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:24.292 16:55:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.292 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.292 16:55:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.552 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.552 16:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:24.552 16:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.552 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.552 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.812 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.812 16:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:24.812 16:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.812 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.812 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.381 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.381 16:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:25.381 16:55:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.381 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.381 16:55:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.641 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.641 16:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:25.641 16:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.641 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.641 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.902 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.902 16:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:25.902 16:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.902 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.902 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.162 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.162 16:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:26.162 16:55:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.162 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.162 16:55:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.423 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.423 16:55:05 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:26.423 16:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.423 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.423 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.993 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.993 16:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:26.993 16:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.993 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.993 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.253 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.253 16:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:27.253 16:55:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.253 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.253 16:55:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.513 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.513 16:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:27.513 16:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.513 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.513 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.774 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.774 16:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:27.774 16:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.774 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.774 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.346 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.346 16:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:28.346 16:55:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.346 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.346 16:55:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.606 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.606 16:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:28.606 16:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.606 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.606 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.866 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.866 16:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 1341738 00:10:28.866 16:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.866 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.866 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.125 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.125 16:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:29.125 16:55:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.125 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.125 16:55:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.384 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.384 16:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:29.384 16:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.384 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.384 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.956 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.956 16:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:29.956 16:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.956 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.956 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.217 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.217 16:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:30.217 16:55:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.217 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.217 16:55:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.477 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.477 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:30.477 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.477 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.477 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.737 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.737 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:30.737 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.737 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.737 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.996 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.996 16:55:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1341738 00:10:30.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1341738) - No such process 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1341738 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.996 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.996 rmmod nvme_tcp 00:10:31.256 rmmod nvme_fabrics 00:10:31.256 rmmod nvme_keyring 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1341509 ']' 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1341509 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1341509 ']' 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1341509 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1341509 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1341509' 00:10:31.256 killing process with pid 1341509 00:10:31.256 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 1341509 00:10:31.257 [2024-05-15 16:55:09.949922] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:31.257 16:55:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1341509 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.257 16:55:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.800 16:55:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.800 00:10:33.800 real 0m20.568s 00:10:33.800 user 0m41.948s 00:10:33.800 sys 0m8.483s 00:10:33.800 16:55:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:33.800 16:55:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.800 ************************************ 00:10:33.800 END TEST nvmf_connect_stress 00:10:33.800 ************************************ 00:10:33.800 16:55:12 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:33.800 16:55:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:33.800 16:55:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.800 16:55:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.800 ************************************ 00:10:33.800 START TEST nvmf_fused_ordering 00:10:33.800 ************************************ 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:33.800 * Looking for test storage... 
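Summarizing the connect_stress phase that just finished, as reconstructed from the xtrace output above (a sketch of the flow, not the verbatim connect_stress.sh; paths shortened to the spdk checkout, and rpc_cmd is the harness wrapper used throughout the trace):

    # give the target a TCP transport, one subsystem with a listener, and a null bdev
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    # run the connect/disconnect stressor (-c 0x1 core mask, -t 10 duration) in the background
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!

    # rpc.txt ($rpcs) was pre-filled with 20 batched requests (the seq 1 20 / cat loop in the trace);
    # keep replaying them against the target for as long as the stressor is still alive
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"
    done
    wait "$PERF_PID"
    rm -f "$rpcs"

Once kill -0 reports "No such process", the trap is cleared and nvmftestfini tears the target and namespace back down, which is the cleanup visible just above.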
00:10:33.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.800 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.801 16:55:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:40.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:40.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.384 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:40.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.385 16:55:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:40.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.385 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:10:40.645 00:10:40.645 --- 10.0.0.2 ping statistics --- 00:10:40.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.645 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:10:40.645 00:10:40.645 --- 10.0.0.1 ping statistics --- 00:10:40.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.645 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1347831 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1347831 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1347831 ']' 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:40.645 16:55:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:40.905 [2024-05-15 16:55:19.499056] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
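The nvmfappstart call in the trace boils down to launching nvmf_tgt inside the namespace that now owns the target-side port and blocking until its RPC socket is up. A condensed sketch, with the Jenkins workspace path shortened and waitforlisten being the autotest_common.sh helper seen in the log:

    # -m 0x2: single reactor on core 1 (matches "Reactor started on core 1" above);
    # -e 0xFFFF: enable all tracepoint groups, as reported by app_setup_trace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPC connections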
00:10:40.905 [2024-05-15 16:55:19.499117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.905 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.905 [2024-05-15 16:55:19.587804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.905 [2024-05-15 16:55:19.679321] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.905 [2024-05-15 16:55:19.679377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.905 [2024-05-15 16:55:19.679385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.905 [2024-05-15 16:55:19.679392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.905 [2024-05-15 16:55:19.679398] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.905 [2024-05-15 16:55:19.679430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.476 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:41.476 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:10:41.476 16:55:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.476 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.476 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.737 [2024-05-15 16:55:20.358713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.737 [2024-05-15 16:55:20.382707] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:41.737 [2024-05-15 16:55:20.382971] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:41.737 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.738 NULL1 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.738 16:55:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:41.738 [2024-05-15 16:55:20.450763] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
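Before the fused_ordering binary is launched, the rpc_cmd calls above configure the target; run by hand against the RPC socket they would look roughly like this (scripts/rpc.py is relative to the spdk checkout, and the NQN, serial number and null-bdev geometry are the values from this run):

    # TCP transport with the options from NVMF_TRANSPORT_OPTS ('-t tcp -o'), 8192-byte IO unit
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # subsystem allowing any host, serial SPDK00000000000001, up to 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1000 MB null bdev with 512-byte blocks, attached as namespace 1
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app is then pointed at that listener with the connection string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'. A kernel host could reach the same subsystem with nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1, shown only for comparison; the fused_ordering binary itself connects through SPDK's userspace NVMe driver, as its -r transport-ID argument suggests.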
00:10:41.738 [2024-05-15 16:55:20.450806] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348046 ] 00:10:41.738 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.308 Attached to nqn.2016-06.io.spdk:cnode1 00:10:42.308 Namespace ID: 1 size: 1GB 00:10:42.308 fused_ordering(0) 00:10:42.308 fused_ordering(1) 00:10:42.308 fused_ordering(2) 00:10:42.308 fused_ordering(3) 00:10:42.308 fused_ordering(4) 00:10:42.308 fused_ordering(5) 00:10:42.308 fused_ordering(6) 00:10:42.308 fused_ordering(7) 00:10:42.308 fused_ordering(8) 00:10:42.308 fused_ordering(9) 00:10:42.308 fused_ordering(10) 00:10:42.308 fused_ordering(11) 00:10:42.308 fused_ordering(12) 00:10:42.308 fused_ordering(13) 00:10:42.308 fused_ordering(14) 00:10:42.308 fused_ordering(15) 00:10:42.308 fused_ordering(16) 00:10:42.308 fused_ordering(17) 00:10:42.308 fused_ordering(18) 00:10:42.308 fused_ordering(19) 00:10:42.308 fused_ordering(20) 00:10:42.308 fused_ordering(21) 00:10:42.308 fused_ordering(22) 00:10:42.308 fused_ordering(23) 00:10:42.308 fused_ordering(24) 00:10:42.308 fused_ordering(25) 00:10:42.308 fused_ordering(26) 00:10:42.308 fused_ordering(27) 00:10:42.308 fused_ordering(28) 00:10:42.308 fused_ordering(29) 00:10:42.308 fused_ordering(30) 00:10:42.308 fused_ordering(31) 00:10:42.308 fused_ordering(32) 00:10:42.308 fused_ordering(33) 00:10:42.308 fused_ordering(34) 00:10:42.308 fused_ordering(35) 00:10:42.308 fused_ordering(36) 00:10:42.308 fused_ordering(37) 00:10:42.308 fused_ordering(38) 00:10:42.308 fused_ordering(39) 00:10:42.308 fused_ordering(40) 00:10:42.308 fused_ordering(41) 00:10:42.308 fused_ordering(42) 00:10:42.308 fused_ordering(43) 00:10:42.308 fused_ordering(44) 00:10:42.308 fused_ordering(45) 00:10:42.308 fused_ordering(46) 00:10:42.308 fused_ordering(47) 00:10:42.308 fused_ordering(48) 00:10:42.308 fused_ordering(49) 00:10:42.308 fused_ordering(50) 00:10:42.308 fused_ordering(51) 00:10:42.308 fused_ordering(52) 00:10:42.308 fused_ordering(53) 00:10:42.308 fused_ordering(54) 00:10:42.308 fused_ordering(55) 00:10:42.308 fused_ordering(56) 00:10:42.308 fused_ordering(57) 00:10:42.308 fused_ordering(58) 00:10:42.308 fused_ordering(59) 00:10:42.308 fused_ordering(60) 00:10:42.308 fused_ordering(61) 00:10:42.308 fused_ordering(62) 00:10:42.308 fused_ordering(63) 00:10:42.308 fused_ordering(64) 00:10:42.308 fused_ordering(65) 00:10:42.308 fused_ordering(66) 00:10:42.308 fused_ordering(67) 00:10:42.308 fused_ordering(68) 00:10:42.308 fused_ordering(69) 00:10:42.308 fused_ordering(70) 00:10:42.308 fused_ordering(71) 00:10:42.308 fused_ordering(72) 00:10:42.308 fused_ordering(73) 00:10:42.308 fused_ordering(74) 00:10:42.308 fused_ordering(75) 00:10:42.308 fused_ordering(76) 00:10:42.308 fused_ordering(77) 00:10:42.308 fused_ordering(78) 00:10:42.308 fused_ordering(79) 00:10:42.308 fused_ordering(80) 00:10:42.308 fused_ordering(81) 00:10:42.308 fused_ordering(82) 00:10:42.308 fused_ordering(83) 00:10:42.308 fused_ordering(84) 00:10:42.308 fused_ordering(85) 00:10:42.308 fused_ordering(86) 00:10:42.308 fused_ordering(87) 00:10:42.308 fused_ordering(88) 00:10:42.308 fused_ordering(89) 00:10:42.308 fused_ordering(90) 00:10:42.308 fused_ordering(91) 00:10:42.308 fused_ordering(92) 00:10:42.308 fused_ordering(93) 00:10:42.308 fused_ordering(94) 00:10:42.308 fused_ordering(95) 00:10:42.308 fused_ordering(96) 00:10:42.308 
fused_ordering(97) 00:10:42.308 fused_ordering(98) 00:10:42.308 fused_ordering(99) 00:10:42.308 fused_ordering(100) 00:10:42.308 fused_ordering(101) 00:10:42.308 fused_ordering(102) 00:10:42.308 fused_ordering(103) 00:10:42.308 fused_ordering(104) 00:10:42.308 fused_ordering(105) 00:10:42.308 fused_ordering(106) 00:10:42.308 fused_ordering(107) 00:10:42.308 fused_ordering(108) 00:10:42.308 fused_ordering(109) 00:10:42.308 fused_ordering(110) 00:10:42.308 fused_ordering(111) 00:10:42.308 fused_ordering(112) 00:10:42.308 fused_ordering(113) 00:10:42.308 fused_ordering(114) 00:10:42.308 fused_ordering(115) 00:10:42.308 fused_ordering(116) 00:10:42.308 fused_ordering(117) 00:10:42.308 fused_ordering(118) 00:10:42.308 fused_ordering(119) 00:10:42.308 fused_ordering(120) 00:10:42.308 fused_ordering(121) 00:10:42.308 fused_ordering(122) 00:10:42.308 fused_ordering(123) 00:10:42.308 fused_ordering(124) 00:10:42.308 fused_ordering(125) 00:10:42.308 fused_ordering(126) 00:10:42.308 fused_ordering(127) 00:10:42.308 fused_ordering(128) 00:10:42.308 fused_ordering(129) 00:10:42.309 fused_ordering(130) 00:10:42.309 fused_ordering(131) 00:10:42.309 fused_ordering(132) 00:10:42.309 fused_ordering(133) 00:10:42.309 fused_ordering(134) 00:10:42.309 fused_ordering(135) 00:10:42.309 fused_ordering(136) 00:10:42.309 fused_ordering(137) 00:10:42.309 fused_ordering(138) 00:10:42.309 fused_ordering(139) 00:10:42.309 fused_ordering(140) 00:10:42.309 fused_ordering(141) 00:10:42.309 fused_ordering(142) 00:10:42.309 fused_ordering(143) 00:10:42.309 fused_ordering(144) 00:10:42.309 fused_ordering(145) 00:10:42.309 fused_ordering(146) 00:10:42.309 fused_ordering(147) 00:10:42.309 fused_ordering(148) 00:10:42.309 fused_ordering(149) 00:10:42.309 fused_ordering(150) 00:10:42.309 fused_ordering(151) 00:10:42.309 fused_ordering(152) 00:10:42.309 fused_ordering(153) 00:10:42.309 fused_ordering(154) 00:10:42.309 fused_ordering(155) 00:10:42.309 fused_ordering(156) 00:10:42.309 fused_ordering(157) 00:10:42.309 fused_ordering(158) 00:10:42.309 fused_ordering(159) 00:10:42.309 fused_ordering(160) 00:10:42.309 fused_ordering(161) 00:10:42.309 fused_ordering(162) 00:10:42.309 fused_ordering(163) 00:10:42.309 fused_ordering(164) 00:10:42.309 fused_ordering(165) 00:10:42.309 fused_ordering(166) 00:10:42.309 fused_ordering(167) 00:10:42.309 fused_ordering(168) 00:10:42.309 fused_ordering(169) 00:10:42.309 fused_ordering(170) 00:10:42.309 fused_ordering(171) 00:10:42.309 fused_ordering(172) 00:10:42.309 fused_ordering(173) 00:10:42.309 fused_ordering(174) 00:10:42.309 fused_ordering(175) 00:10:42.309 fused_ordering(176) 00:10:42.309 fused_ordering(177) 00:10:42.309 fused_ordering(178) 00:10:42.309 fused_ordering(179) 00:10:42.309 fused_ordering(180) 00:10:42.309 fused_ordering(181) 00:10:42.309 fused_ordering(182) 00:10:42.309 fused_ordering(183) 00:10:42.309 fused_ordering(184) 00:10:42.309 fused_ordering(185) 00:10:42.309 fused_ordering(186) 00:10:42.309 fused_ordering(187) 00:10:42.309 fused_ordering(188) 00:10:42.309 fused_ordering(189) 00:10:42.309 fused_ordering(190) 00:10:42.309 fused_ordering(191) 00:10:42.309 fused_ordering(192) 00:10:42.309 fused_ordering(193) 00:10:42.309 fused_ordering(194) 00:10:42.309 fused_ordering(195) 00:10:42.309 fused_ordering(196) 00:10:42.309 fused_ordering(197) 00:10:42.309 fused_ordering(198) 00:10:42.309 fused_ordering(199) 00:10:42.309 fused_ordering(200) 00:10:42.309 fused_ordering(201) 00:10:42.309 fused_ordering(202) 00:10:42.309 fused_ordering(203) 00:10:42.309 fused_ordering(204) 
00:10:42.309 fused_ordering(205) 00:10:42.569 fused_ordering(206) 00:10:42.569 fused_ordering(207) 00:10:42.569 fused_ordering(208) 00:10:42.569 fused_ordering(209) 00:10:42.569 fused_ordering(210) 00:10:42.569 fused_ordering(211) 00:10:42.569 fused_ordering(212) 00:10:42.569 fused_ordering(213) 00:10:42.569 fused_ordering(214) 00:10:42.569 fused_ordering(215) 00:10:42.569 fused_ordering(216) 00:10:42.569 fused_ordering(217) 00:10:42.569 fused_ordering(218) 00:10:42.569 fused_ordering(219) 00:10:42.569 fused_ordering(220) 00:10:42.569 fused_ordering(221) 00:10:42.569 fused_ordering(222) 00:10:42.569 fused_ordering(223) 00:10:42.569 fused_ordering(224) 00:10:42.569 fused_ordering(225) 00:10:42.569 fused_ordering(226) 00:10:42.569 fused_ordering(227) 00:10:42.569 fused_ordering(228) 00:10:42.569 fused_ordering(229) 00:10:42.569 fused_ordering(230) 00:10:42.569 fused_ordering(231) 00:10:42.569 fused_ordering(232) 00:10:42.569 fused_ordering(233) 00:10:42.569 fused_ordering(234) 00:10:42.569 fused_ordering(235) 00:10:42.569 fused_ordering(236) 00:10:42.569 fused_ordering(237) 00:10:42.569 fused_ordering(238) 00:10:42.569 fused_ordering(239) 00:10:42.569 fused_ordering(240) 00:10:42.569 fused_ordering(241) 00:10:42.569 fused_ordering(242) 00:10:42.569 fused_ordering(243) 00:10:42.569 fused_ordering(244) 00:10:42.569 fused_ordering(245) 00:10:42.569 fused_ordering(246) 00:10:42.569 fused_ordering(247) 00:10:42.569 fused_ordering(248) 00:10:42.569 fused_ordering(249) 00:10:42.569 fused_ordering(250) 00:10:42.569 fused_ordering(251) 00:10:42.569 fused_ordering(252) 00:10:42.569 fused_ordering(253) 00:10:42.569 fused_ordering(254) 00:10:42.569 fused_ordering(255) 00:10:42.569 fused_ordering(256) 00:10:42.569 fused_ordering(257) 00:10:42.569 fused_ordering(258) 00:10:42.569 fused_ordering(259) 00:10:42.569 fused_ordering(260) 00:10:42.569 fused_ordering(261) 00:10:42.569 fused_ordering(262) 00:10:42.569 fused_ordering(263) 00:10:42.569 fused_ordering(264) 00:10:42.569 fused_ordering(265) 00:10:42.569 fused_ordering(266) 00:10:42.569 fused_ordering(267) 00:10:42.569 fused_ordering(268) 00:10:42.569 fused_ordering(269) 00:10:42.569 fused_ordering(270) 00:10:42.569 fused_ordering(271) 00:10:42.569 fused_ordering(272) 00:10:42.569 fused_ordering(273) 00:10:42.569 fused_ordering(274) 00:10:42.569 fused_ordering(275) 00:10:42.569 fused_ordering(276) 00:10:42.569 fused_ordering(277) 00:10:42.569 fused_ordering(278) 00:10:42.569 fused_ordering(279) 00:10:42.569 fused_ordering(280) 00:10:42.569 fused_ordering(281) 00:10:42.569 fused_ordering(282) 00:10:42.569 fused_ordering(283) 00:10:42.569 fused_ordering(284) 00:10:42.569 fused_ordering(285) 00:10:42.569 fused_ordering(286) 00:10:42.569 fused_ordering(287) 00:10:42.569 fused_ordering(288) 00:10:42.569 fused_ordering(289) 00:10:42.569 fused_ordering(290) 00:10:42.569 fused_ordering(291) 00:10:42.569 fused_ordering(292) 00:10:42.569 fused_ordering(293) 00:10:42.569 fused_ordering(294) 00:10:42.569 fused_ordering(295) 00:10:42.569 fused_ordering(296) 00:10:42.569 fused_ordering(297) 00:10:42.569 fused_ordering(298) 00:10:42.569 fused_ordering(299) 00:10:42.569 fused_ordering(300) 00:10:42.569 fused_ordering(301) 00:10:42.569 fused_ordering(302) 00:10:42.569 fused_ordering(303) 00:10:42.569 fused_ordering(304) 00:10:42.569 fused_ordering(305) 00:10:42.569 fused_ordering(306) 00:10:42.569 fused_ordering(307) 00:10:42.569 fused_ordering(308) 00:10:42.569 fused_ordering(309) 00:10:42.569 fused_ordering(310) 00:10:42.569 fused_ordering(311) 00:10:42.569 
fused_ordering(312) 00:10:42.569 fused_ordering(313) 00:10:42.569 fused_ordering(314) 00:10:42.569 fused_ordering(315) 00:10:42.569 fused_ordering(316) 00:10:42.569 fused_ordering(317) 00:10:42.569 fused_ordering(318) 00:10:42.569 fused_ordering(319) 00:10:42.569 fused_ordering(320) 00:10:42.569 fused_ordering(321) 00:10:42.569 fused_ordering(322) 00:10:42.569 fused_ordering(323) 00:10:42.569 fused_ordering(324) 00:10:42.569 fused_ordering(325) 00:10:42.569 fused_ordering(326) 00:10:42.569 fused_ordering(327) 00:10:42.569 fused_ordering(328) 00:10:42.569 fused_ordering(329) 00:10:42.569 fused_ordering(330) 00:10:42.569 fused_ordering(331) 00:10:42.569 fused_ordering(332) 00:10:42.569 fused_ordering(333) 00:10:42.569 fused_ordering(334) 00:10:42.569 fused_ordering(335) 00:10:42.569 fused_ordering(336) 00:10:42.569 fused_ordering(337) 00:10:42.569 fused_ordering(338) 00:10:42.569 fused_ordering(339) 00:10:42.569 fused_ordering(340) 00:10:42.569 fused_ordering(341) 00:10:42.569 fused_ordering(342) 00:10:42.569 fused_ordering(343) 00:10:42.569 fused_ordering(344) 00:10:42.569 fused_ordering(345) 00:10:42.569 fused_ordering(346) 00:10:42.569 fused_ordering(347) 00:10:42.570 fused_ordering(348) 00:10:42.570 fused_ordering(349) 00:10:42.570 fused_ordering(350) 00:10:42.570 fused_ordering(351) 00:10:42.570 fused_ordering(352) 00:10:42.570 fused_ordering(353) 00:10:42.570 fused_ordering(354) 00:10:42.570 fused_ordering(355) 00:10:42.570 fused_ordering(356) 00:10:42.570 fused_ordering(357) 00:10:42.570 fused_ordering(358) 00:10:42.570 fused_ordering(359) 00:10:42.570 fused_ordering(360) 00:10:42.570 fused_ordering(361) 00:10:42.570 fused_ordering(362) 00:10:42.570 fused_ordering(363) 00:10:42.570 fused_ordering(364) 00:10:42.570 fused_ordering(365) 00:10:42.570 fused_ordering(366) 00:10:42.570 fused_ordering(367) 00:10:42.570 fused_ordering(368) 00:10:42.570 fused_ordering(369) 00:10:42.570 fused_ordering(370) 00:10:42.570 fused_ordering(371) 00:10:42.570 fused_ordering(372) 00:10:42.570 fused_ordering(373) 00:10:42.570 fused_ordering(374) 00:10:42.570 fused_ordering(375) 00:10:42.570 fused_ordering(376) 00:10:42.570 fused_ordering(377) 00:10:42.570 fused_ordering(378) 00:10:42.570 fused_ordering(379) 00:10:42.570 fused_ordering(380) 00:10:42.570 fused_ordering(381) 00:10:42.570 fused_ordering(382) 00:10:42.570 fused_ordering(383) 00:10:42.570 fused_ordering(384) 00:10:42.570 fused_ordering(385) 00:10:42.570 fused_ordering(386) 00:10:42.570 fused_ordering(387) 00:10:42.570 fused_ordering(388) 00:10:42.570 fused_ordering(389) 00:10:42.570 fused_ordering(390) 00:10:42.570 fused_ordering(391) 00:10:42.570 fused_ordering(392) 00:10:42.570 fused_ordering(393) 00:10:42.570 fused_ordering(394) 00:10:42.570 fused_ordering(395) 00:10:42.570 fused_ordering(396) 00:10:42.570 fused_ordering(397) 00:10:42.570 fused_ordering(398) 00:10:42.570 fused_ordering(399) 00:10:42.570 fused_ordering(400) 00:10:42.570 fused_ordering(401) 00:10:42.570 fused_ordering(402) 00:10:42.570 fused_ordering(403) 00:10:42.570 fused_ordering(404) 00:10:42.570 fused_ordering(405) 00:10:42.570 fused_ordering(406) 00:10:42.570 fused_ordering(407) 00:10:42.570 fused_ordering(408) 00:10:42.570 fused_ordering(409) 00:10:42.570 fused_ordering(410) 00:10:42.830 fused_ordering(411) 00:10:42.830 fused_ordering(412) 00:10:42.830 fused_ordering(413) 00:10:42.830 fused_ordering(414) 00:10:42.830 fused_ordering(415) 00:10:42.830 fused_ordering(416) 00:10:42.830 fused_ordering(417) 00:10:42.830 fused_ordering(418) 00:10:42.830 fused_ordering(419) 
00:10:42.830 fused_ordering(420) 00:10:42.830 fused_ordering(421) 00:10:42.830 fused_ordering(422) 00:10:42.830 fused_ordering(423) 00:10:42.830 fused_ordering(424) 00:10:42.830 fused_ordering(425) 00:10:42.830 fused_ordering(426) 00:10:42.830 fused_ordering(427) 00:10:42.830 fused_ordering(428) 00:10:42.830 fused_ordering(429) 00:10:42.830 fused_ordering(430) 00:10:42.830 fused_ordering(431) 00:10:42.830 fused_ordering(432) 00:10:42.830 fused_ordering(433) 00:10:42.830 fused_ordering(434) 00:10:42.830 fused_ordering(435) 00:10:42.830 fused_ordering(436) 00:10:42.830 fused_ordering(437) 00:10:42.830 fused_ordering(438) 00:10:42.830 fused_ordering(439) 00:10:42.830 fused_ordering(440) 00:10:42.830 fused_ordering(441) 00:10:42.830 fused_ordering(442) 00:10:42.830 fused_ordering(443) 00:10:42.830 fused_ordering(444) 00:10:42.830 fused_ordering(445) 00:10:42.830 fused_ordering(446) 00:10:42.830 fused_ordering(447) 00:10:42.830 fused_ordering(448) 00:10:42.830 fused_ordering(449) 00:10:42.830 fused_ordering(450) 00:10:42.830 fused_ordering(451) 00:10:42.830 fused_ordering(452) 00:10:42.830 fused_ordering(453) 00:10:42.830 fused_ordering(454) 00:10:42.830 fused_ordering(455) 00:10:42.830 fused_ordering(456) 00:10:42.830 fused_ordering(457) 00:10:42.830 fused_ordering(458) 00:10:42.830 fused_ordering(459) 00:10:42.830 fused_ordering(460) 00:10:42.830 fused_ordering(461) 00:10:42.830 fused_ordering(462) 00:10:42.830 fused_ordering(463) 00:10:42.830 fused_ordering(464) 00:10:42.830 fused_ordering(465) 00:10:42.830 fused_ordering(466) 00:10:42.830 fused_ordering(467) 00:10:42.830 fused_ordering(468) 00:10:42.830 fused_ordering(469) 00:10:42.830 fused_ordering(470) 00:10:42.830 fused_ordering(471) 00:10:42.830 fused_ordering(472) 00:10:42.830 fused_ordering(473) 00:10:42.830 fused_ordering(474) 00:10:42.830 fused_ordering(475) 00:10:42.830 fused_ordering(476) 00:10:42.830 fused_ordering(477) 00:10:42.830 fused_ordering(478) 00:10:42.830 fused_ordering(479) 00:10:42.830 fused_ordering(480) 00:10:42.830 fused_ordering(481) 00:10:42.830 fused_ordering(482) 00:10:42.830 fused_ordering(483) 00:10:42.830 fused_ordering(484) 00:10:42.830 fused_ordering(485) 00:10:42.830 fused_ordering(486) 00:10:42.830 fused_ordering(487) 00:10:42.830 fused_ordering(488) 00:10:42.830 fused_ordering(489) 00:10:42.830 fused_ordering(490) 00:10:42.830 fused_ordering(491) 00:10:42.830 fused_ordering(492) 00:10:42.830 fused_ordering(493) 00:10:42.830 fused_ordering(494) 00:10:42.830 fused_ordering(495) 00:10:42.830 fused_ordering(496) 00:10:42.830 fused_ordering(497) 00:10:42.830 fused_ordering(498) 00:10:42.830 fused_ordering(499) 00:10:42.830 fused_ordering(500) 00:10:42.830 fused_ordering(501) 00:10:42.830 fused_ordering(502) 00:10:42.830 fused_ordering(503) 00:10:42.830 fused_ordering(504) 00:10:42.830 fused_ordering(505) 00:10:42.830 fused_ordering(506) 00:10:42.830 fused_ordering(507) 00:10:42.830 fused_ordering(508) 00:10:42.830 fused_ordering(509) 00:10:42.830 fused_ordering(510) 00:10:42.830 fused_ordering(511) 00:10:42.830 fused_ordering(512) 00:10:42.830 fused_ordering(513) 00:10:42.830 fused_ordering(514) 00:10:42.830 fused_ordering(515) 00:10:42.830 fused_ordering(516) 00:10:42.830 fused_ordering(517) 00:10:42.830 fused_ordering(518) 00:10:42.830 fused_ordering(519) 00:10:42.830 fused_ordering(520) 00:10:42.830 fused_ordering(521) 00:10:42.830 fused_ordering(522) 00:10:42.830 fused_ordering(523) 00:10:42.830 fused_ordering(524) 00:10:42.830 fused_ordering(525) 00:10:42.830 fused_ordering(526) 00:10:42.830 
fused_ordering(527) 00:10:42.830 fused_ordering(528) 00:10:42.830 fused_ordering(529) 00:10:42.830 fused_ordering(530) 00:10:42.830 fused_ordering(531) 00:10:42.830 fused_ordering(532) 00:10:42.830 fused_ordering(533) 00:10:42.830 fused_ordering(534) 00:10:42.830 fused_ordering(535) 00:10:42.830 fused_ordering(536) 00:10:42.830 fused_ordering(537) 00:10:42.830 fused_ordering(538) 00:10:42.830 fused_ordering(539) 00:10:42.830 fused_ordering(540) 00:10:42.830 fused_ordering(541) 00:10:42.830 fused_ordering(542) 00:10:42.830 fused_ordering(543) 00:10:42.830 fused_ordering(544) 00:10:42.830 fused_ordering(545) 00:10:42.830 fused_ordering(546) 00:10:42.830 fused_ordering(547) 00:10:42.830 fused_ordering(548) 00:10:42.830 fused_ordering(549) 00:10:42.830 fused_ordering(550) 00:10:42.830 fused_ordering(551) 00:10:42.830 fused_ordering(552) 00:10:42.830 fused_ordering(553) 00:10:42.830 fused_ordering(554) 00:10:42.830 fused_ordering(555) 00:10:42.830 fused_ordering(556) 00:10:42.830 fused_ordering(557) 00:10:42.830 fused_ordering(558) 00:10:42.830 fused_ordering(559) 00:10:42.830 fused_ordering(560) 00:10:42.830 fused_ordering(561) 00:10:42.830 fused_ordering(562) 00:10:42.830 fused_ordering(563) 00:10:42.830 fused_ordering(564) 00:10:42.830 fused_ordering(565) 00:10:42.830 fused_ordering(566) 00:10:42.830 fused_ordering(567) 00:10:42.830 fused_ordering(568) 00:10:42.830 fused_ordering(569) 00:10:42.830 fused_ordering(570) 00:10:42.830 fused_ordering(571) 00:10:42.830 fused_ordering(572) 00:10:42.830 fused_ordering(573) 00:10:42.830 fused_ordering(574) 00:10:42.830 fused_ordering(575) 00:10:42.830 fused_ordering(576) 00:10:42.830 fused_ordering(577) 00:10:42.830 fused_ordering(578) 00:10:42.830 fused_ordering(579) 00:10:42.830 fused_ordering(580) 00:10:42.830 fused_ordering(581) 00:10:42.830 fused_ordering(582) 00:10:42.830 fused_ordering(583) 00:10:42.830 fused_ordering(584) 00:10:42.830 fused_ordering(585) 00:10:42.830 fused_ordering(586) 00:10:42.830 fused_ordering(587) 00:10:42.830 fused_ordering(588) 00:10:42.830 fused_ordering(589) 00:10:42.830 fused_ordering(590) 00:10:42.830 fused_ordering(591) 00:10:42.830 fused_ordering(592) 00:10:42.830 fused_ordering(593) 00:10:42.830 fused_ordering(594) 00:10:42.830 fused_ordering(595) 00:10:42.830 fused_ordering(596) 00:10:42.830 fused_ordering(597) 00:10:42.830 fused_ordering(598) 00:10:42.830 fused_ordering(599) 00:10:42.830 fused_ordering(600) 00:10:42.830 fused_ordering(601) 00:10:42.830 fused_ordering(602) 00:10:42.830 fused_ordering(603) 00:10:42.831 fused_ordering(604) 00:10:42.831 fused_ordering(605) 00:10:42.831 fused_ordering(606) 00:10:42.831 fused_ordering(607) 00:10:42.831 fused_ordering(608) 00:10:42.831 fused_ordering(609) 00:10:42.831 fused_ordering(610) 00:10:42.831 fused_ordering(611) 00:10:42.831 fused_ordering(612) 00:10:42.831 fused_ordering(613) 00:10:42.831 fused_ordering(614) 00:10:42.831 fused_ordering(615) 00:10:43.400 fused_ordering(616) 00:10:43.400 fused_ordering(617) 00:10:43.400 fused_ordering(618) 00:10:43.400 fused_ordering(619) 00:10:43.400 fused_ordering(620) 00:10:43.400 fused_ordering(621) 00:10:43.400 fused_ordering(622) 00:10:43.400 fused_ordering(623) 00:10:43.400 fused_ordering(624) 00:10:43.400 fused_ordering(625) 00:10:43.400 fused_ordering(626) 00:10:43.400 fused_ordering(627) 00:10:43.400 fused_ordering(628) 00:10:43.400 fused_ordering(629) 00:10:43.400 fused_ordering(630) 00:10:43.400 fused_ordering(631) 00:10:43.400 fused_ordering(632) 00:10:43.400 fused_ordering(633) 00:10:43.400 fused_ordering(634) 
00:10:43.400 fused_ordering(635) 00:10:43.400 fused_ordering(636) 00:10:43.400 fused_ordering(637) 00:10:43.400 fused_ordering(638) 00:10:43.400 fused_ordering(639) 00:10:43.400 fused_ordering(640) 00:10:43.400 fused_ordering(641) 00:10:43.400 fused_ordering(642) 00:10:43.400 fused_ordering(643) 00:10:43.400 fused_ordering(644) 00:10:43.400 fused_ordering(645) 00:10:43.400 fused_ordering(646) 00:10:43.400 fused_ordering(647) 00:10:43.400 fused_ordering(648) 00:10:43.400 fused_ordering(649) 00:10:43.400 fused_ordering(650) 00:10:43.400 fused_ordering(651) 00:10:43.400 fused_ordering(652) 00:10:43.400 fused_ordering(653) 00:10:43.400 fused_ordering(654) 00:10:43.400 fused_ordering(655) 00:10:43.400 fused_ordering(656) 00:10:43.400 fused_ordering(657) 00:10:43.401 fused_ordering(658) 00:10:43.401 fused_ordering(659) 00:10:43.401 fused_ordering(660) 00:10:43.401 fused_ordering(661) 00:10:43.401 fused_ordering(662) 00:10:43.401 fused_ordering(663) 00:10:43.401 fused_ordering(664) 00:10:43.401 fused_ordering(665) 00:10:43.401 fused_ordering(666) 00:10:43.401 fused_ordering(667) 00:10:43.401 fused_ordering(668) 00:10:43.401 fused_ordering(669) 00:10:43.401 fused_ordering(670) 00:10:43.401 fused_ordering(671) 00:10:43.401 fused_ordering(672) 00:10:43.401 fused_ordering(673) 00:10:43.401 fused_ordering(674) 00:10:43.401 fused_ordering(675) 00:10:43.401 fused_ordering(676) 00:10:43.401 fused_ordering(677) 00:10:43.401 fused_ordering(678) 00:10:43.401 fused_ordering(679) 00:10:43.401 fused_ordering(680) 00:10:43.401 fused_ordering(681) 00:10:43.401 fused_ordering(682) 00:10:43.401 fused_ordering(683) 00:10:43.401 fused_ordering(684) 00:10:43.401 fused_ordering(685) 00:10:43.401 fused_ordering(686) 00:10:43.401 fused_ordering(687) 00:10:43.401 fused_ordering(688) 00:10:43.401 fused_ordering(689) 00:10:43.401 fused_ordering(690) 00:10:43.401 fused_ordering(691) 00:10:43.401 fused_ordering(692) 00:10:43.401 fused_ordering(693) 00:10:43.401 fused_ordering(694) 00:10:43.401 fused_ordering(695) 00:10:43.401 fused_ordering(696) 00:10:43.401 fused_ordering(697) 00:10:43.401 fused_ordering(698) 00:10:43.401 fused_ordering(699) 00:10:43.401 fused_ordering(700) 00:10:43.401 fused_ordering(701) 00:10:43.401 fused_ordering(702) 00:10:43.401 fused_ordering(703) 00:10:43.401 fused_ordering(704) 00:10:43.401 fused_ordering(705) 00:10:43.401 fused_ordering(706) 00:10:43.401 fused_ordering(707) 00:10:43.401 fused_ordering(708) 00:10:43.401 fused_ordering(709) 00:10:43.401 fused_ordering(710) 00:10:43.401 fused_ordering(711) 00:10:43.401 fused_ordering(712) 00:10:43.401 fused_ordering(713) 00:10:43.401 fused_ordering(714) 00:10:43.401 fused_ordering(715) 00:10:43.401 fused_ordering(716) 00:10:43.401 fused_ordering(717) 00:10:43.401 fused_ordering(718) 00:10:43.401 fused_ordering(719) 00:10:43.401 fused_ordering(720) 00:10:43.401 fused_ordering(721) 00:10:43.401 fused_ordering(722) 00:10:43.401 fused_ordering(723) 00:10:43.401 fused_ordering(724) 00:10:43.401 fused_ordering(725) 00:10:43.401 fused_ordering(726) 00:10:43.401 fused_ordering(727) 00:10:43.401 fused_ordering(728) 00:10:43.401 fused_ordering(729) 00:10:43.401 fused_ordering(730) 00:10:43.401 fused_ordering(731) 00:10:43.401 fused_ordering(732) 00:10:43.401 fused_ordering(733) 00:10:43.401 fused_ordering(734) 00:10:43.401 fused_ordering(735) 00:10:43.401 fused_ordering(736) 00:10:43.401 fused_ordering(737) 00:10:43.401 fused_ordering(738) 00:10:43.401 fused_ordering(739) 00:10:43.401 fused_ordering(740) 00:10:43.401 fused_ordering(741) 00:10:43.401 
fused_ordering(742) 00:10:43.401 fused_ordering(743) 00:10:43.401 fused_ordering(744) 00:10:43.401 fused_ordering(745) 00:10:43.401 fused_ordering(746) 00:10:43.401 fused_ordering(747) 00:10:43.401 fused_ordering(748) 00:10:43.401 fused_ordering(749) 00:10:43.401 fused_ordering(750) 00:10:43.401 fused_ordering(751) 00:10:43.401 fused_ordering(752) 00:10:43.401 fused_ordering(753) 00:10:43.401 fused_ordering(754) 00:10:43.401 fused_ordering(755) 00:10:43.401 fused_ordering(756) 00:10:43.401 fused_ordering(757) 00:10:43.401 fused_ordering(758) 00:10:43.401 fused_ordering(759) 00:10:43.401 fused_ordering(760) 00:10:43.401 fused_ordering(761) 00:10:43.401 fused_ordering(762) 00:10:43.401 fused_ordering(763) 00:10:43.401 fused_ordering(764) 00:10:43.401 fused_ordering(765) 00:10:43.401 fused_ordering(766) 00:10:43.401 fused_ordering(767) 00:10:43.401 fused_ordering(768) 00:10:43.401 fused_ordering(769) 00:10:43.401 fused_ordering(770) 00:10:43.401 fused_ordering(771) 00:10:43.401 fused_ordering(772) 00:10:43.401 fused_ordering(773) 00:10:43.401 fused_ordering(774) 00:10:43.401 fused_ordering(775) 00:10:43.401 fused_ordering(776) 00:10:43.401 fused_ordering(777) 00:10:43.401 fused_ordering(778) 00:10:43.401 fused_ordering(779) 00:10:43.401 fused_ordering(780) 00:10:43.401 fused_ordering(781) 00:10:43.401 fused_ordering(782) 00:10:43.401 fused_ordering(783) 00:10:43.401 fused_ordering(784) 00:10:43.401 fused_ordering(785) 00:10:43.401 fused_ordering(786) 00:10:43.401 fused_ordering(787) 00:10:43.401 fused_ordering(788) 00:10:43.401 fused_ordering(789) 00:10:43.401 fused_ordering(790) 00:10:43.401 fused_ordering(791) 00:10:43.401 fused_ordering(792) 00:10:43.401 fused_ordering(793) 00:10:43.401 fused_ordering(794) 00:10:43.401 fused_ordering(795) 00:10:43.401 fused_ordering(796) 00:10:43.401 fused_ordering(797) 00:10:43.401 fused_ordering(798) 00:10:43.401 fused_ordering(799) 00:10:43.401 fused_ordering(800) 00:10:43.401 fused_ordering(801) 00:10:43.401 fused_ordering(802) 00:10:43.401 fused_ordering(803) 00:10:43.401 fused_ordering(804) 00:10:43.401 fused_ordering(805) 00:10:43.401 fused_ordering(806) 00:10:43.401 fused_ordering(807) 00:10:43.401 fused_ordering(808) 00:10:43.401 fused_ordering(809) 00:10:43.401 fused_ordering(810) 00:10:43.401 fused_ordering(811) 00:10:43.401 fused_ordering(812) 00:10:43.401 fused_ordering(813) 00:10:43.401 fused_ordering(814) 00:10:43.401 fused_ordering(815) 00:10:43.401 fused_ordering(816) 00:10:43.401 fused_ordering(817) 00:10:43.401 fused_ordering(818) 00:10:43.401 fused_ordering(819) 00:10:43.401 fused_ordering(820) 00:10:43.972 fused_ordering(821) 00:10:43.972 fused_ordering(822) 00:10:43.972 fused_ordering(823) 00:10:43.972 fused_ordering(824) 00:10:43.972 fused_ordering(825) 00:10:43.972 fused_ordering(826) 00:10:43.972 fused_ordering(827) 00:10:43.972 fused_ordering(828) 00:10:43.972 fused_ordering(829) 00:10:43.972 fused_ordering(830) 00:10:43.972 fused_ordering(831) 00:10:43.972 fused_ordering(832) 00:10:43.972 fused_ordering(833) 00:10:43.972 fused_ordering(834) 00:10:43.972 fused_ordering(835) 00:10:43.972 fused_ordering(836) 00:10:43.972 fused_ordering(837) 00:10:43.972 fused_ordering(838) 00:10:43.972 fused_ordering(839) 00:10:43.972 fused_ordering(840) 00:10:43.972 fused_ordering(841) 00:10:43.972 fused_ordering(842) 00:10:43.972 fused_ordering(843) 00:10:43.972 fused_ordering(844) 00:10:43.972 fused_ordering(845) 00:10:43.972 fused_ordering(846) 00:10:43.972 fused_ordering(847) 00:10:43.972 fused_ordering(848) 00:10:43.972 fused_ordering(849) 
00:10:43.972 fused_ordering(850) 00:10:43.972 fused_ordering(851) 00:10:43.972 fused_ordering(852) 00:10:43.972 fused_ordering(853) 00:10:43.972 fused_ordering(854) 00:10:43.972 fused_ordering(855) 00:10:43.972 fused_ordering(856) 00:10:43.972 fused_ordering(857) 00:10:43.972 fused_ordering(858) 00:10:43.972 fused_ordering(859) 00:10:43.972 fused_ordering(860) 00:10:43.972 fused_ordering(861) 00:10:43.972 fused_ordering(862) 00:10:43.972 fused_ordering(863) 00:10:43.972 fused_ordering(864) 00:10:43.972 fused_ordering(865) 00:10:43.972 fused_ordering(866) 00:10:43.972 fused_ordering(867) 00:10:43.972 fused_ordering(868) 00:10:43.972 fused_ordering(869) 00:10:43.972 fused_ordering(870) 00:10:43.972 fused_ordering(871) 00:10:43.972 fused_ordering(872) 00:10:43.972 fused_ordering(873) 00:10:43.972 fused_ordering(874) 00:10:43.972 fused_ordering(875) 00:10:43.972 fused_ordering(876) 00:10:43.972 fused_ordering(877) 00:10:43.972 fused_ordering(878) 00:10:43.972 fused_ordering(879) 00:10:43.972 fused_ordering(880) 00:10:43.972 fused_ordering(881) 00:10:43.972 fused_ordering(882) 00:10:43.972 fused_ordering(883) 00:10:43.972 fused_ordering(884) 00:10:43.972 fused_ordering(885) 00:10:43.972 fused_ordering(886) 00:10:43.972 fused_ordering(887) 00:10:43.972 fused_ordering(888) 00:10:43.972 fused_ordering(889) 00:10:43.972 fused_ordering(890) 00:10:43.972 fused_ordering(891) 00:10:43.972 fused_ordering(892) 00:10:43.972 fused_ordering(893) 00:10:43.972 fused_ordering(894) 00:10:43.972 fused_ordering(895) 00:10:43.972 fused_ordering(896) 00:10:43.972 fused_ordering(897) 00:10:43.972 fused_ordering(898) 00:10:43.972 fused_ordering(899) 00:10:43.972 fused_ordering(900) 00:10:43.972 fused_ordering(901) 00:10:43.972 fused_ordering(902) 00:10:43.972 fused_ordering(903) 00:10:43.972 fused_ordering(904) 00:10:43.972 fused_ordering(905) 00:10:43.972 fused_ordering(906) 00:10:43.972 fused_ordering(907) 00:10:43.972 fused_ordering(908) 00:10:43.972 fused_ordering(909) 00:10:43.972 fused_ordering(910) 00:10:43.972 fused_ordering(911) 00:10:43.972 fused_ordering(912) 00:10:43.972 fused_ordering(913) 00:10:43.972 fused_ordering(914) 00:10:43.972 fused_ordering(915) 00:10:43.972 fused_ordering(916) 00:10:43.972 fused_ordering(917) 00:10:43.972 fused_ordering(918) 00:10:43.972 fused_ordering(919) 00:10:43.972 fused_ordering(920) 00:10:43.972 fused_ordering(921) 00:10:43.972 fused_ordering(922) 00:10:43.972 fused_ordering(923) 00:10:43.972 fused_ordering(924) 00:10:43.972 fused_ordering(925) 00:10:43.972 fused_ordering(926) 00:10:43.972 fused_ordering(927) 00:10:43.972 fused_ordering(928) 00:10:43.972 fused_ordering(929) 00:10:43.972 fused_ordering(930) 00:10:43.972 fused_ordering(931) 00:10:43.972 fused_ordering(932) 00:10:43.972 fused_ordering(933) 00:10:43.972 fused_ordering(934) 00:10:43.972 fused_ordering(935) 00:10:43.972 fused_ordering(936) 00:10:43.972 fused_ordering(937) 00:10:43.972 fused_ordering(938) 00:10:43.972 fused_ordering(939) 00:10:43.972 fused_ordering(940) 00:10:43.972 fused_ordering(941) 00:10:43.972 fused_ordering(942) 00:10:43.972 fused_ordering(943) 00:10:43.972 fused_ordering(944) 00:10:43.972 fused_ordering(945) 00:10:43.972 fused_ordering(946) 00:10:43.972 fused_ordering(947) 00:10:43.972 fused_ordering(948) 00:10:43.972 fused_ordering(949) 00:10:43.972 fused_ordering(950) 00:10:43.972 fused_ordering(951) 00:10:43.972 fused_ordering(952) 00:10:43.972 fused_ordering(953) 00:10:43.972 fused_ordering(954) 00:10:43.972 fused_ordering(955) 00:10:43.972 fused_ordering(956) 00:10:43.972 
fused_ordering(957) 00:10:43.972 fused_ordering(958) 00:10:43.972 fused_ordering(959) 00:10:43.972 fused_ordering(960) 00:10:43.972 fused_ordering(961) 00:10:43.972 fused_ordering(962) 00:10:43.972 fused_ordering(963) 00:10:43.972 fused_ordering(964) 00:10:43.972 fused_ordering(965) 00:10:43.972 fused_ordering(966) 00:10:43.972 fused_ordering(967) 00:10:43.972 fused_ordering(968) 00:10:43.972 fused_ordering(969) 00:10:43.972 fused_ordering(970) 00:10:43.972 fused_ordering(971) 00:10:43.972 fused_ordering(972) 00:10:43.972 fused_ordering(973) 00:10:43.972 fused_ordering(974) 00:10:43.972 fused_ordering(975) 00:10:43.972 fused_ordering(976) 00:10:43.972 fused_ordering(977) 00:10:43.972 fused_ordering(978) 00:10:43.972 fused_ordering(979) 00:10:43.972 fused_ordering(980) 00:10:43.972 fused_ordering(981) 00:10:43.972 fused_ordering(982) 00:10:43.972 fused_ordering(983) 00:10:43.972 fused_ordering(984) 00:10:43.972 fused_ordering(985) 00:10:43.972 fused_ordering(986) 00:10:43.972 fused_ordering(987) 00:10:43.972 fused_ordering(988) 00:10:43.972 fused_ordering(989) 00:10:43.972 fused_ordering(990) 00:10:43.972 fused_ordering(991) 00:10:43.972 fused_ordering(992) 00:10:43.972 fused_ordering(993) 00:10:43.972 fused_ordering(994) 00:10:43.972 fused_ordering(995) 00:10:43.972 fused_ordering(996) 00:10:43.972 fused_ordering(997) 00:10:43.972 fused_ordering(998) 00:10:43.972 fused_ordering(999) 00:10:43.972 fused_ordering(1000) 00:10:43.972 fused_ordering(1001) 00:10:43.972 fused_ordering(1002) 00:10:43.972 fused_ordering(1003) 00:10:43.972 fused_ordering(1004) 00:10:43.972 fused_ordering(1005) 00:10:43.972 fused_ordering(1006) 00:10:43.972 fused_ordering(1007) 00:10:43.972 fused_ordering(1008) 00:10:43.972 fused_ordering(1009) 00:10:43.972 fused_ordering(1010) 00:10:43.972 fused_ordering(1011) 00:10:43.972 fused_ordering(1012) 00:10:43.972 fused_ordering(1013) 00:10:43.972 fused_ordering(1014) 00:10:43.972 fused_ordering(1015) 00:10:43.972 fused_ordering(1016) 00:10:43.972 fused_ordering(1017) 00:10:43.972 fused_ordering(1018) 00:10:43.972 fused_ordering(1019) 00:10:43.972 fused_ordering(1020) 00:10:43.972 fused_ordering(1021) 00:10:43.972 fused_ordering(1022) 00:10:43.972 fused_ordering(1023) 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.972 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.972 rmmod nvme_tcp 00:10:43.972 rmmod nvme_fabrics 00:10:44.236 rmmod nvme_keyring 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1347831 ']' 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1347831 
00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1347831 ']' 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1347831 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1347831 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1347831' 00:10:44.236 killing process with pid 1347831 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1347831 00:10:44.236 [2024-05-15 16:55:22.908168] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:44.236 16:55:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1347831 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.236 16:55:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.782 16:55:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.782 00:10:46.782 real 0m12.950s 00:10:46.782 user 0m7.080s 00:10:46.782 sys 0m6.621s 00:10:46.782 16:55:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.782 16:55:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:46.782 ************************************ 00:10:46.782 END TEST nvmf_fused_ordering 00:10:46.782 ************************************ 00:10:46.782 16:55:25 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:46.782 16:55:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:46.782 16:55:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:46.782 16:55:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.782 ************************************ 00:10:46.782 START TEST nvmf_delete_subsystem 00:10:46.782 ************************************ 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
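The teardown traced above (trap handler, then nvmftestfini and nvmfcleanup) reduces to roughly the following; the namespace removal line is an assumption about what _remove_spdk_ns amounts to here, since only its xtrace wrapper is visible in the log:

    # unload the kernel NVMe/TCP host modules pulled in during setup
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target started by nvmfappstart (1347831 is this run's pid)
    kill 1347831
    # assumed equivalent of _remove_spdk_ns, followed by the final address flush
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1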
00:10:46.782 * Looking for test storage... 00:10:46.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.782 16:55:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:53.437 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:53.437 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.437 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:53.438 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:53.438 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.438 16:55:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.438 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:10:53.438 00:10:53.438 --- 10.0.0.2 ping statistics --- 00:10:53.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.438 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:10:53.699 00:10:53.699 --- 10.0.0.1 ping statistics --- 00:10:53.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.699 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1352543 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1352543 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1352543 ']' 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
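For readers following the nvmf_tcp_init trace above: the two E810 ports discovered earlier are used as a self-contained target/initiator pair, so the harness isolates the target-side port in its own network namespace, addresses both sides out of 10.0.0.0/24, opens TCP port 4420, and then verifies reachability with a ping in each direction. Condensed into a standalone sequence (interface names cvl_0_0/cvl_0_1, the namespace name and the addresses are specific to this run):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running nvmf_tgt inside cvl_0_0_ns_spdk (the ip netns exec prefix visible in the following trace lines) is what lets a single host act as both NVMe/TCP target and initiator over real NICs.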
00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:53.699 16:55:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.699 [2024-05-15 16:55:32.352222] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:10:53.699 [2024-05-15 16:55:32.352275] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.699 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.699 [2024-05-15 16:55:32.412440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.699 [2024-05-15 16:55:32.479292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.699 [2024-05-15 16:55:32.479325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.699 [2024-05-15 16:55:32.479333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.699 [2024-05-15 16:55:32.479339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.699 [2024-05-15 16:55:32.479345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.699 [2024-05-15 16:55:32.479480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.699 [2024-05-15 16:55:32.479482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.641 [2024-05-15 16:55:33.183018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.641 16:55:33 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.641 [2024-05-15 16:55:33.207031] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:54.641 [2024-05-15 16:55:33.207213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:54.641 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.642 NULL1 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.642 Delay0 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1352825 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:54.642 16:55:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:54.642 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.642 [2024-05-15 16:55:33.304064] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
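The rpc_cmd calls traced above assemble the target that this test will later delete while I/O is in flight: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (roughly 1 s of artificial latency per I/O, so the 128-deep perf workload still has commands queued when the subsystem goes away). Issued directly with scripts/rpc.py the sequence looks roughly like the sketch below; rpc_cmd is the harness wrapper around it, and the netns/RPC-socket plumbing is omitted here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                                # data-discarding backing bdev
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # avg/p99 read+write latency, in microseconds
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The two-second sleep that follows simply gives the 5-second perf job time to ramp up before nvmf_delete_subsystem is issued.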
00:10:56.556 16:55:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.556 16:55:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.556 16:55:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 
Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Read completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 Write completed with error (sct=0, sc=8) 00:10:56.556 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 Read completed with error (sct=0, sc=8) 00:10:56.557 Write completed with error (sct=0, sc=8) 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 
00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.557 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 starting I/O failed: -6 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 [2024-05-15 16:55:35.392874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd7d4000c00 is same with the state(5) to be set 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read 
completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Write completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.818 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Write completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:56.819 Write completed with error (sct=0, sc=8) 00:10:56.819 Read completed with error (sct=0, sc=8) 00:10:57.762 [2024-05-15 16:55:36.359350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd9060 is same with the state(5) to be set 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 
00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 [2024-05-15 16:55:36.391724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce1c20 is same with the state(5) to be set 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 [2024-05-15 16:55:36.391901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdaf10 is same with the state(5) to be set 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read 
completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 [2024-05-15 16:55:36.394456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd7d400bfe0 is same with the state(5) to be set 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Write completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 Read completed with error (sct=0, sc=8) 00:10:57.762 [2024-05-15 16:55:36.394613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd7d400c780 is same with the state(5) to be set 00:10:57.762 Initializing NVMe Controllers 00:10:57.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:57.762 Controller IO queue size 128, less than required. 00:10:57.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:57.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:57.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:57.763 Initialization complete. Launching workers. 
00:10:57.763 ======================================================== 00:10:57.763 Latency(us) 00:10:57.763 Device Information : IOPS MiB/s Average min max 00:10:57.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.36 0.09 915558.20 275.61 1007094.95 00:10:57.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.44 0.08 941904.33 307.35 2000982.30 00:10:57.763 ======================================================== 00:10:57.763 Total : 340.80 0.17 927806.84 275.61 2000982.30 00:10:57.763 00:10:57.763 [2024-05-15 16:55:36.395152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd9060 (9): Bad file descriptor 00:10:57.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:57.763 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.763 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:57.763 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1352825 00:10:57.763 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1352825 00:10:58.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1352825) - No such process 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1352825 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1352825 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1352825 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
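The long run of 'Read/Write completed with error (sct=0, sc=8)' lines above is the behavior this test exists to exercise: status-code type 0, status code 0x08 is the generic NVMe 'Command Aborted due to SQ Deletion' status, which is what the queued I/O receives once nvmf_delete_subsystem removes cnode1 underneath the running perf job. After issuing the delete, the script only has to prove that spdk_nvme_perf notices and exits, which it does with a bounded kill -0 poll followed by a wait that is expected to fail (hence the NOT helper). A sketch of that pattern, with the pid, interval and bound read off the trace rather than copied from delete_subsystem.sh verbatim:

  perf_pid=1352825                   # pid recorded when spdk_nvme_perf was launched
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do                           # still alive?
      (( delay++ > 30 )) && { echo "perf survived the subsystem delete" >&2; exit 1; }
      sleep 0.5
  done
  NOT wait "$perf_pid"               # NOT (autotest_common.sh helper) succeeds only when the wrapped command fails

The 'kill: (1352825) - No such process' message above is that poll observing the perf process disappear, after which the subsystem, listener and namespace are simply re-created for the second half of the test.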
00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.335 [2024-05-15 16:55:36.927819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1353494 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:10:58.335 16:55:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:58.335 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.335 [2024-05-15 16:55:36.994298] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
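For reference, the workload just re-launched (perf_pid 1353494) is the same spdk_nvme_perf invocation as before, shortened to a 3-second run. An annotated copy of the command from the trace follows; the per-flag notes are a best-effort reading of perf's options, not output from the test itself:

  # -c 0xC  : core mask, cores 2 and 3 (matches the 'lcore 2' / 'lcore 3' association lines)
  # -r ...  : NVMe-oF transport ID, i.e. NVMe/TCP to 10.0.0.2 port 4420
  # -t 3    : run time in seconds (the first run above used -t 5)
  # -q 128  : queue depth of 128 outstanding I/Os
  # -w randrw -M 70 : random mixed workload, 70% reads / 30% writes
  # -o 512  : 512-byte I/O size
  # -P 4    : qpair count option, kept exactly as the script passes it
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4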
00:10:58.645 16:55:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:58.645 16:55:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:10:58.645 16:55:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:59.216 16:55:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:59.216 16:55:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:10:59.216 16:55:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:59.788 16:55:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:59.788 16:55:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:10:59.788 16:55:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:00.359 16:55:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:00.359 16:55:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:11:00.359 16:55:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:00.930 16:55:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:00.930 16:55:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:11:00.930 16:55:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.191 16:55:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.191 16:55:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:11:01.191 16:55:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.451 Initializing NVMe Controllers 00:11:01.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:01.451 Controller IO queue size 128, less than required. 00:11:01.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:01.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:01.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:01.451 Initialization complete. Launching workers. 
00:11:01.451 ======================================================== 00:11:01.451 Latency(us) 00:11:01.451 Device Information : IOPS MiB/s Average min max 00:11:01.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002059.64 1000178.56 1007173.93 00:11:01.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002840.61 1000308.35 1009221.88 00:11:01.451 ======================================================== 00:11:01.451 Total : 256.00 0.12 1002450.12 1000178.56 1009221.88 00:11:01.451 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1353494 00:11:01.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1353494) - No such process 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1353494 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.712 rmmod nvme_tcp 00:11:01.712 rmmod nvme_fabrics 00:11:01.712 rmmod nvme_keyring 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1352543 ']' 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1352543 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1352543 ']' 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1352543 00:11:01.712 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1352543 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1352543' 00:11:01.972 killing process with pid 1352543 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1352543 00:11:01.972 [2024-05-15 16:55:40.573823] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 1352543 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.972 16:55:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.515 16:55:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:04.515 00:11:04.515 real 0m17.594s 00:11:04.515 user 0m30.605s 00:11:04.515 sys 0m5.920s 00:11:04.515 16:55:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.515 16:55:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:04.515 ************************************ 00:11:04.515 END TEST nvmf_delete_subsystem 00:11:04.515 ************************************ 00:11:04.515 16:55:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:04.515 16:55:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:04.515 16:55:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.515 16:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.515 ************************************ 00:11:04.515 START TEST nvmf_ns_masking 00:11:04.515 ************************************ 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:04.515 * Looking for test storage... 
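By this point nvmftestfini has unwound everything the delete_subsystem test set up: the EXIT trap is cleared, the initiator-side nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process is killed, and the namespace plus leftover addressing are removed before the timing summary is printed. A condensed reading of that teardown (the _remove_spdk_ns helper runs with its tracing suppressed, so the namespace deletion shown here is the inferred effect, not a traced command):

  trap - SIGINT SIGTERM EXIT
  sync
  modprobe -v -r nvme-tcp            # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  nvmfpid=1352543                    # nvmf_tgt pid recorded at startup
  kill "$nvmfpid"
  ip netns delete cvl_0_0_ns_spdk    # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

The same prologue/teardown pair then repeats for the next test in the suite, nvmf_ns_masking, starting with the sourcing of nvmf/common.sh traced below.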
00:11:04.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=bd1ef4d6-0438-4d1a-a1f2-bb627c6bbaed 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.515 16:55:42 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.515 16:55:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:11.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:11.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:11.105 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
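
The "Found net devices under ..." lines in this trace come from a sysfs walk: for each matching Intel E810 PCI function (0x8086:0x159b) the common.sh helper lists the device's net/ directory to recover the kernel interface name. A minimal standalone sketch of the same lookup, assuming the PCI address 0000:4b:00.0 seen in this run:

    # Map a PCI address to its bound net device(s), as the trace above does.
    pci=0000:4b:00.0
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] || continue      # nothing bound to this PCI function
        echo "Found net device under $pci: ${netdir##*/}"
    done
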
00:11:11.105 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:11.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:11.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:11:11.106 00:11:11.106 --- 10.0.0.2 ping statistics --- 00:11:11.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.106 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:11.106 00:11:11.106 --- 10.0.0.1 ping statistics --- 00:11:11.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.106 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1358367 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1358367 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1358367 ']' 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:11.106 16:55:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:11.367 [2024-05-15 16:55:49.956493] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
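
By this point nvmf_tcp_init has split the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings confirm reachability in both directions before nvmf_tgt is started inside the namespace. Condensed, the plumbing performed above is (same interface and namespace names as this run; a restatement of the trace, not the nvmf/common.sh code itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator
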
00:11:11.367 [2024-05-15 16:55:49.956541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.367 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.367 [2024-05-15 16:55:50.021766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.367 [2024-05-15 16:55:50.089733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.367 [2024-05-15 16:55:50.089771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.367 [2024-05-15 16:55:50.089779] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.367 [2024-05-15 16:55:50.089785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.367 [2024-05-15 16:55:50.089791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.367 [2024-05-15 16:55:50.089964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.367 [2024-05-15 16:55:50.090078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.367 [2024-05-15 16:55:50.090234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.367 [2024-05-15 16:55:50.090235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.938 16:55:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.199 [2024-05-15 16:55:50.906557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.199 16:55:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:12.199 16:55:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:12.199 16:55:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:12.460 Malloc1 00:11:12.460 16:55:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:12.460 Malloc2 00:11:12.460 16:55:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.722 16:55:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:12.982 16:55:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.982 [2024-05-15 16:55:51.767231] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:12.982 [2024-05-15 16:55:51.767482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.982 16:55:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:12.982 16:55:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bd1ef4d6-0438-4d1a-a1f2-bb627c6bbaed -a 10.0.0.2 -s 4420 -i 4 00:11:13.244 16:55:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.244 16:55:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:13.244 16:55:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.244 16:55:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:13.244 16:55:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:15.789 [ 0]:0x1 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=82af050146854e0c8d65a0431477568b 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 82af050146854e0c8d65a0431477568b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:15.789 [ 0]:0x1 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=82af050146854e0c8d65a0431477568b 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 82af050146854e0c8d65a0431477568b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:15.789 [ 1]:0x2 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.789 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.050 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:16.050 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:16.051 16:55:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bd1ef4d6-0438-4d1a-a1f2-bb627c6bbaed -a 10.0.0.2 -s 4420 -i 4 00:11:16.312 16:55:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:16.312 16:55:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:16.312 16:55:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.312 16:55:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:16.312 16:55:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:16.312 16:55:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:18.225 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:18.486 [ 0]:0x2 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.486 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:18.746 [ 0]:0x1 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=82af050146854e0c8d65a0431477568b 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 82af050146854e0c8d65a0431477568b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:18.746 [ 1]:0x2 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.746 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:19.007 16:55:57 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:19.007 [ 0]:0x2 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:19.007 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.267 16:55:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:19.267 16:55:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:19.267 16:55:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bd1ef4d6-0438-4d1a-a1f2-bb627c6bbaed -a 10.0.0.2 -s 4420 -i 4 00:11:19.527 16:55:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:19.527 16:55:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:19.527 16:55:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.527 16:55:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:19.527 16:55:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:19.527 16:55:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:21.446 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:21.707 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:21.707 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:21.708 [ 0]:0x1 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=82af050146854e0c8d65a0431477568b 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 82af050146854e0c8d65a0431477568b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:21.708 [ 1]:0x2 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.708 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:21.967 [ 0]:0x2 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:21.967 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:22.227 [2024-05-15 16:56:00.827484] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:22.227 
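
This is a deliberate negative check: namespace 2 was added without --no-auto-visible in this run, which is why the ns_masking test expects this nvmf_ns_remove_host call to be rejected; rpc.py then prints the failed request together with the -32602 "Invalid parameters" response shown next. Re-issued on its own, the expected-to-fail call looks like (same script path and NQNs as the trace):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        || echo "rejected: per-host masking only applies to namespaces created with --no-auto-visible"
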
request: 00:11:22.227 { 00:11:22.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.227 "nsid": 2, 00:11:22.227 "host": "nqn.2016-06.io.spdk:host1", 00:11:22.227 "method": "nvmf_ns_remove_host", 00:11:22.227 "req_id": 1 00:11:22.227 } 00:11:22.227 Got JSON-RPC error response 00:11:22.227 response: 00:11:22.227 { 00:11:22.227 "code": -32602, 00:11:22.227 "message": "Invalid parameters" 00:11:22.227 } 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:22.227 [ 0]:0x2 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4df875aef1ea41999a776323ea681187 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4df875aef1ea41999a776323ea681187 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:22.227 16:56:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.227 16:56:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.487 rmmod nvme_tcp 00:11:22.487 rmmod nvme_fabrics 00:11:22.487 rmmod nvme_keyring 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1358367 ']' 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1358367 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1358367 ']' 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1358367 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1358367 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1358367' 00:11:22.487 killing process with pid 1358367 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1358367 00:11:22.487 [2024-05-15 16:56:01.292410] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:22.487 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 1358367 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.750 16:56:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.736 16:56:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:24.736 00:11:24.736 real 0m20.698s 00:11:24.736 user 0m49.814s 00:11:24.736 sys 0m6.673s 00:11:24.736 16:56:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:24.736 16:56:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:24.736 ************************************ 00:11:24.736 END TEST nvmf_ns_masking 00:11:24.736 ************************************ 00:11:24.736 16:56:03 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:24.736 16:56:03 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:24.736 16:56:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:24.736 16:56:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:24.736 16:56:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:24.736 ************************************ 00:11:24.736 START TEST nvmf_nvme_cli 00:11:24.736 ************************************ 00:11:24.736 16:56:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:24.996 * Looking for test storage... 
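
The ns_masking test has finished and nvme_cli.sh starts below against the same target setup. For reference, the control-plane sequence just exercised condenses to the following rpc.py and nvme-cli calls taken from the trace above (same NQNs, host UUID, and 10.0.0.2:4420 listener; the order is compressed, so treat this as a recap sketch rather than a substitute for ns_masking.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                     # auto-visible to every host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # masked by default
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1           # unmask nsid 1 for host1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I bd1ef4d6-0438-4d1a-a1f2-bb627c6bbaed -a 10.0.0.2 -s 4420 -i 4
    nvme list-ns /dev/nvme0                              # nsid 1 listed only while host1 is allowed
    nvme id-ns /dev/nvme0 -n 1 -o json | jq -r .nguid    # all-zero NGUID here means the namespace is masked
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # mask it again
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
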
00:11:24.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.996 16:56:03 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:24.997 16:56:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:31.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:31.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:31.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:31.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.579 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:11:31.840 00:11:31.840 --- 10.0.0.2 ping statistics --- 00:11:31.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.840 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:11:31.840 00:11:31.840 --- 10.0.0.1 ping statistics --- 00:11:31.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.840 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.840 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1364868 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1364868 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1364868 ']' 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:32.100 16:56:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:32.100 [2024-05-15 16:56:10.761525] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:11:32.100 [2024-05-15 16:56:10.761599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.100 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.100 [2024-05-15 16:56:10.836209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.100 [2024-05-15 16:56:10.910760] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.100 [2024-05-15 16:56:10.910799] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
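The nvmf_tcp_init block above is what lets a single host act as both target and initiator on real E810 ports: one port (cvl_0_0) is moved into a private network namespace for the target while its sibling (cvl_0_1) stays in the root namespace for the initiator. Condensed into plain commands, and using the interface names, addresses and namespace name from this run, the plumbing is roughly:

    # target port goes into its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP (4420) traffic arriving on the root-namespace port
    ping -c 1 10.0.0.2                                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns

Both pings completing with 0% loss is what lets the helper return 0 above; nvmf_tgt is then started through NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk) so its port 4420 listener binds inside the namespace.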
00:11:32.100 [2024-05-15 16:56:10.910806] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.100 [2024-05-15 16:56:10.910813] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.100 [2024-05-15 16:56:10.910819] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.100 [2024-05-15 16:56:10.910953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.100 [2024-05-15 16:56:10.911071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.100 [2024-05-15 16:56:10.911229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.100 [2024-05-15 16:56:10.911230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 [2024-05-15 16:56:11.582101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 Malloc0 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 Malloc1 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 [2024-05-15 16:56:11.667607] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:33.043 [2024-05-15 16:56:11.667843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:33.043 00:11:33.043 Discovery Log Number of Records 2, Generation counter 2 00:11:33.043 =====Discovery Log Entry 0====== 00:11:33.043 trtype: tcp 00:11:33.043 adrfam: ipv4 00:11:33.043 subtype: current discovery subsystem 00:11:33.043 treq: not required 00:11:33.043 portid: 0 00:11:33.043 trsvcid: 4420 00:11:33.043 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:33.043 traddr: 10.0.0.2 00:11:33.043 eflags: explicit discovery connections, duplicate discovery information 00:11:33.043 sectype: none 00:11:33.043 =====Discovery Log Entry 1====== 00:11:33.043 trtype: tcp 00:11:33.043 adrfam: ipv4 00:11:33.043 subtype: nvme subsystem 00:11:33.043 treq: not required 00:11:33.043 portid: 0 00:11:33.043 trsvcid: 4420 00:11:33.043 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:33.043 traddr: 10.0.0.2 00:11:33.043 eflags: none 00:11:33.043 sectype: none 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
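target/nvme_cli.sh drives the whole target configuration through rpc.py (rpc_cmd is a thin wrapper around it), and the discovery output above is the first check that the subsystem is reachable. Collapsed into plain commands, with the NQN, serial and addresses used in this run and the host identity shortened to placeholders, the target-side sequence plus the initiator-side checks that follow look roughly like this:

    # target side: TCP transport, two 64 MB / 512-byte-block malloc bdevs, one subsystem
    # exposing both namespaces, plus data and discovery listeners on 10.0.0.2:4420
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # initiator side: discovery must report two records (discovery subsystem + cnode1),
    # connect must surface two namespaces carrying the subsystem serial, then tear down
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expected: 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

HOSTNQN and HOSTID stand in for the nvme gen-hostnqn values visible in the log; rpc.py talks to the default /var/tmp/spdk.sock, which the namespaced target can still create because only the network stack, not the filesystem, is isolated by the namespace.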
00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:33.043 16:56:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.953 16:56:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:34.953 16:56:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:11:34.953 16:56:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.953 16:56:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:34.953 16:56:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:34.953 16:56:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:36.867 /dev/nvme0n1 ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:36.867 16:56:15 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:36.867 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.127 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.127 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:11:37.127 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:37.127 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.127 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:37.127 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.128 16:56:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.128 rmmod nvme_tcp 00:11:37.387 rmmod nvme_fabrics 00:11:37.387 rmmod nvme_keyring 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1364868 ']' 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1364868 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 1364868 ']' 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1364868 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1364868 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1364868' 00:11:37.387 killing process with pid 1364868 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1364868 00:11:37.387 [2024-05-15 16:56:16.070446] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:37.387 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1364868 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.647 16:56:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.558 16:56:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.558 00:11:39.558 real 0m14.726s 00:11:39.558 user 0m23.258s 00:11:39.558 sys 0m5.767s 00:11:39.558 16:56:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:39.558 16:56:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:39.558 ************************************ 00:11:39.558 END TEST nvmf_nvme_cli 00:11:39.558 ************************************ 00:11:39.558 16:56:18 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:39.558 16:56:18 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:39.558 16:56:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:39.558 16:56:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:39.558 16:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.558 ************************************ 00:11:39.558 
START TEST nvmf_vfio_user 00:11:39.558 ************************************ 00:11:39.558 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:39.819 * Looking for test storage... 00:11:39.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.819 16:56:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1366352 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1366352' 00:11:39.820 Process pid: 1366352 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1366352 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1366352 ']' 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 16:56:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:39.820 [2024-05-15 16:56:18.524736] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:11:39.820 [2024-05-15 16:56:18.524794] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.820 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.820 [2024-05-15 16:56:18.586355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.081 [2024-05-15 16:56:18.655059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.081 [2024-05-15 16:56:18.655094] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.081 [2024-05-15 16:56:18.655101] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.081 [2024-05-15 16:56:18.655108] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.081 [2024-05-15 16:56:18.655114] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
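As in the TCP case, the target application for the vfio-user test is started first and only provisioned once its RPC socket answers; the waitforlisten call above is that gate. A minimal sketch of the pattern, assuming rpc.py is on PATH and $rootdir points at the SPDK checkout (the real paths in this run are the Jenkins workspace ones):

    # start nvmf_tgt pinned to cores 0-3, then poll the RPC socket until it is live
    "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done

This roughly mirrors what waitforlisten does in common/autotest_common.sh, though the helper adds retry limits and better diagnostics.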
00:11:40.081 [2024-05-15 16:56:18.655251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.081 [2024-05-15 16:56:18.655416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.081 [2024-05-15 16:56:18.655590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.081 [2024-05-15 16:56:18.655597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.654 16:56:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:40.654 16:56:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:11:40.654 16:56:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:41.596 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:41.857 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:41.857 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:41.857 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:41.857 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:41.857 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:41.857 Malloc1 00:11:41.857 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:42.118 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:42.379 16:56:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:42.380 [2024-05-15 16:56:21.137662] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:42.380 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:42.380 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:42.380 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:42.641 Malloc2 00:11:42.641 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:42.902 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:42.902 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
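For the VFIOUSER transport the listener address is not an IP/port pair but a directory that will hold the emulated controller's vfio-user socket, which is why setup_nvmf_vfio_user creates one directory per device before adding the listener. Folded into a loop, the per-device provisioning logged above is roughly (paths, NQNs and serials as in this run):

    rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i          # this directory is the traddr
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

The identify step that follows attaches to the first device with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'; the vfio_user_pci and nvme_ctrlr traces below are that attach in action: the emulated BARs get mapped, then the standard controller bring-up (write CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, SET FEATURES) runs against the vfio-user device just as it would against PCIe hardware.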
00:11:43.166 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:43.166 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:43.166 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.166 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:43.166 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:43.166 16:56:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:43.166 [2024-05-15 16:56:21.882106] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:11:43.166 [2024-05-15 16:56:21.882173] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367040 ] 00:11:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.166 [2024-05-15 16:56:21.914172] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:43.166 [2024-05-15 16:56:21.922878] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:43.166 [2024-05-15 16:56:21.922898] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd0e6b61000 00:11:43.166 [2024-05-15 16:56:21.923870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.924871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.925871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.926879] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.927882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.928896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.929901] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.930907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:43.166 [2024-05-15 16:56:21.931914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:43.166 [2024-05-15 16:56:21.931929] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd0e6b56000 00:11:43.166 [2024-05-15 16:56:21.933261] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:43.166 [2024-05-15 16:56:21.950165] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:43.166 [2024-05-15 16:56:21.950194] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:43.166 [2024-05-15 16:56:21.955040] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:43.167 [2024-05-15 16:56:21.955092] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:43.167 [2024-05-15 16:56:21.955182] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:43.167 [2024-05-15 16:56:21.955198] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:43.167 [2024-05-15 16:56:21.955204] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:43.167 [2024-05-15 16:56:21.956034] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:43.167 [2024-05-15 16:56:21.956043] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:43.167 [2024-05-15 16:56:21.956050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:43.167 [2024-05-15 16:56:21.957042] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:43.167 [2024-05-15 16:56:21.957050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:43.167 [2024-05-15 16:56:21.957057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:43.167 [2024-05-15 16:56:21.958047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:43.167 [2024-05-15 16:56:21.958055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:43.167 [2024-05-15 16:56:21.959055] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:43.167 [2024-05-15 16:56:21.959063] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:43.167 [2024-05-15 16:56:21.959069] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:43.167 [2024-05-15 16:56:21.959075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:43.167 
[2024-05-15 16:56:21.959181] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:43.167 [2024-05-15 16:56:21.959185] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:43.167 [2024-05-15 16:56:21.959190] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:43.167 [2024-05-15 16:56:21.960072] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:43.167 [2024-05-15 16:56:21.961063] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:43.167 [2024-05-15 16:56:21.962070] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:43.167 [2024-05-15 16:56:21.963072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:43.167 [2024-05-15 16:56:21.963127] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:43.167 [2024-05-15 16:56:21.964089] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:43.167 [2024-05-15 16:56:21.964096] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:43.167 [2024-05-15 16:56:21.964101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964123] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:43.167 [2024-05-15 16:56:21.964131] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964148] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:43.167 [2024-05-15 16:56:21.964154] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.167 [2024-05-15 16:56:21.964169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.167 [2024-05-15 16:56:21.964197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:43.167 [2024-05-15 16:56:21.964206] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:43.167 [2024-05-15 16:56:21.964211] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:43.167 [2024-05-15 16:56:21.964216] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:43.167 [2024-05-15 16:56:21.964220] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:43.167 [2024-05-15 16:56:21.964225] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:43.167 [2024-05-15 16:56:21.964229] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:43.167 [2024-05-15 16:56:21.964234] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:43.167 [2024-05-15 16:56:21.964269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:43.167 [2024-05-15 16:56:21.964284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.167 [2024-05-15 16:56:21.964292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.167 [2024-05-15 16:56:21.964301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.167 [2024-05-15 16:56:21.964311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.167 [2024-05-15 16:56:21.964315] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:43.167 [2024-05-15 16:56:21.964338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:43.167 [2024-05-15 16:56:21.964343] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:43.167 [2024-05-15 16:56:21.964350] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964357] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964363] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:43.167 [2024-05-15 16:56:21.964372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.167 [2024-05-15 
16:56:21.964383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:43.167 [2024-05-15 16:56:21.964432] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964447] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:43.168 [2024-05-15 16:56:21.964452] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:43.168 [2024-05-15 16:56:21.964458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964484] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:43.168 [2024-05-15 16:56:21.964497] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964505] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964511] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:43.168 [2024-05-15 16:56:21.964516] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.168 [2024-05-15 16:56:21.964522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964552] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964562] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964569] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:43.168 [2024-05-15 16:56:21.964573] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.168 [2024-05-15 16:56:21.964579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:43.168 
[2024-05-15 16:56:21.964607] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964614] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964620] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964626] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964631] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:43.168 [2024-05-15 16:56:21.964635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:43.168 [2024-05-15 16:56:21.964640] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:43.168 [2024-05-15 16:56:21.964661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964745] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:43.168 [2024-05-15 16:56:21.964749] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:43.168 [2024-05-15 16:56:21.964752] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:43.168 [2024-05-15 16:56:21.964756] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:43.168 [2024-05-15 16:56:21.964762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:43.168 [2024-05-15 16:56:21.964770] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:43.168 [2024-05-15 16:56:21.964776] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:43.168 [2024-05-15 16:56:21.964781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964789] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:43.168 [2024-05-15 16:56:21.964793] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:43.168 [2024-05-15 16:56:21.964799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964808] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:43.168 [2024-05-15 16:56:21.964812] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:43.168 [2024-05-15 16:56:21.964818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:43.168 [2024-05-15 16:56:21.964825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:43.168 [2024-05-15 16:56:21.964856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:43.168 ===================================================== 00:11:43.168 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:43.168 ===================================================== 00:11:43.168 Controller Capabilities/Features 00:11:43.168 ================================ 00:11:43.168 Vendor ID: 4e58 00:11:43.168 Subsystem Vendor ID: 4e58 00:11:43.168 Serial Number: SPDK1 00:11:43.168 Model Number: SPDK bdev Controller 00:11:43.168 Firmware Version: 24.05 00:11:43.168 Recommended Arb Burst: 6 00:11:43.168 IEEE OUI Identifier: 8d 6b 50 00:11:43.168 Multi-path I/O 00:11:43.168 May have multiple subsystem ports: Yes 00:11:43.169 May have multiple controllers: Yes 00:11:43.169 Associated with SR-IOV VF: No 00:11:43.169 Max Data Transfer Size: 131072 00:11:43.169 Max Number of Namespaces: 32 00:11:43.169 Max Number of I/O Queues: 127 00:11:43.169 NVMe Specification Version (VS): 1.3 00:11:43.169 NVMe Specification Version (Identify): 1.3 00:11:43.169 Maximum Queue Entries: 256 00:11:43.169 Contiguous Queues Required: Yes 00:11:43.169 Arbitration Mechanisms Supported 00:11:43.169 Weighted Round Robin: Not Supported 00:11:43.169 Vendor Specific: Not Supported 00:11:43.169 Reset Timeout: 15000 ms 00:11:43.169 Doorbell Stride: 4 bytes 00:11:43.169 NVM Subsystem Reset: Not Supported 00:11:43.169 Command Sets Supported 00:11:43.169 NVM Command Set: Supported 00:11:43.169 Boot Partition: Not Supported 00:11:43.169 Memory Page Size Minimum: 4096 bytes 00:11:43.169 Memory Page Size Maximum: 4096 bytes 00:11:43.169 Persistent Memory Region: Not Supported 00:11:43.169 Optional Asynchronous 
Events Supported 00:11:43.169 Namespace Attribute Notices: Supported 00:11:43.169 Firmware Activation Notices: Not Supported 00:11:43.169 ANA Change Notices: Not Supported 00:11:43.169 PLE Aggregate Log Change Notices: Not Supported 00:11:43.169 LBA Status Info Alert Notices: Not Supported 00:11:43.169 EGE Aggregate Log Change Notices: Not Supported 00:11:43.169 Normal NVM Subsystem Shutdown event: Not Supported 00:11:43.169 Zone Descriptor Change Notices: Not Supported 00:11:43.169 Discovery Log Change Notices: Not Supported 00:11:43.169 Controller Attributes 00:11:43.169 128-bit Host Identifier: Supported 00:11:43.169 Non-Operational Permissive Mode: Not Supported 00:11:43.169 NVM Sets: Not Supported 00:11:43.169 Read Recovery Levels: Not Supported 00:11:43.169 Endurance Groups: Not Supported 00:11:43.169 Predictable Latency Mode: Not Supported 00:11:43.169 Traffic Based Keep ALive: Not Supported 00:11:43.169 Namespace Granularity: Not Supported 00:11:43.169 SQ Associations: Not Supported 00:11:43.169 UUID List: Not Supported 00:11:43.169 Multi-Domain Subsystem: Not Supported 00:11:43.169 Fixed Capacity Management: Not Supported 00:11:43.169 Variable Capacity Management: Not Supported 00:11:43.169 Delete Endurance Group: Not Supported 00:11:43.169 Delete NVM Set: Not Supported 00:11:43.169 Extended LBA Formats Supported: Not Supported 00:11:43.169 Flexible Data Placement Supported: Not Supported 00:11:43.169 00:11:43.169 Controller Memory Buffer Support 00:11:43.169 ================================ 00:11:43.169 Supported: No 00:11:43.169 00:11:43.169 Persistent Memory Region Support 00:11:43.169 ================================ 00:11:43.169 Supported: No 00:11:43.169 00:11:43.169 Admin Command Set Attributes 00:11:43.169 ============================ 00:11:43.169 Security Send/Receive: Not Supported 00:11:43.169 Format NVM: Not Supported 00:11:43.169 Firmware Activate/Download: Not Supported 00:11:43.169 Namespace Management: Not Supported 00:11:43.169 Device Self-Test: Not Supported 00:11:43.169 Directives: Not Supported 00:11:43.169 NVMe-MI: Not Supported 00:11:43.169 Virtualization Management: Not Supported 00:11:43.169 Doorbell Buffer Config: Not Supported 00:11:43.169 Get LBA Status Capability: Not Supported 00:11:43.169 Command & Feature Lockdown Capability: Not Supported 00:11:43.169 Abort Command Limit: 4 00:11:43.169 Async Event Request Limit: 4 00:11:43.169 Number of Firmware Slots: N/A 00:11:43.169 Firmware Slot 1 Read-Only: N/A 00:11:43.169 Firmware Activation Without Reset: N/A 00:11:43.169 Multiple Update Detection Support: N/A 00:11:43.169 Firmware Update Granularity: No Information Provided 00:11:43.169 Per-Namespace SMART Log: No 00:11:43.169 Asymmetric Namespace Access Log Page: Not Supported 00:11:43.169 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:43.169 Command Effects Log Page: Supported 00:11:43.169 Get Log Page Extended Data: Supported 00:11:43.169 Telemetry Log Pages: Not Supported 00:11:43.169 Persistent Event Log Pages: Not Supported 00:11:43.169 Supported Log Pages Log Page: May Support 00:11:43.169 Commands Supported & Effects Log Page: Not Supported 00:11:43.169 Feature Identifiers & Effects Log Page:May Support 00:11:43.169 NVMe-MI Commands & Effects Log Page: May Support 00:11:43.169 Data Area 4 for Telemetry Log: Not Supported 00:11:43.169 Error Log Page Entries Supported: 128 00:11:43.169 Keep Alive: Supported 00:11:43.169 Keep Alive Granularity: 10000 ms 00:11:43.169 00:11:43.169 NVM Command Set Attributes 00:11:43.169 ========================== 
00:11:43.169 Submission Queue Entry Size 00:11:43.169 Max: 64 00:11:43.169 Min: 64 00:11:43.169 Completion Queue Entry Size 00:11:43.169 Max: 16 00:11:43.169 Min: 16 00:11:43.169 Number of Namespaces: 32 00:11:43.169 Compare Command: Supported 00:11:43.169 Write Uncorrectable Command: Not Supported 00:11:43.169 Dataset Management Command: Supported 00:11:43.169 Write Zeroes Command: Supported 00:11:43.169 Set Features Save Field: Not Supported 00:11:43.169 Reservations: Not Supported 00:11:43.169 Timestamp: Not Supported 00:11:43.169 Copy: Supported 00:11:43.169 Volatile Write Cache: Present 00:11:43.169 Atomic Write Unit (Normal): 1 00:11:43.169 Atomic Write Unit (PFail): 1 00:11:43.169 Atomic Compare & Write Unit: 1 00:11:43.169 Fused Compare & Write: Supported 00:11:43.169 Scatter-Gather List 00:11:43.169 SGL Command Set: Supported (Dword aligned) 00:11:43.169 SGL Keyed: Not Supported 00:11:43.169 SGL Bit Bucket Descriptor: Not Supported 00:11:43.169 SGL Metadata Pointer: Not Supported 00:11:43.169 Oversized SGL: Not Supported 00:11:43.169 SGL Metadata Address: Not Supported 00:11:43.169 SGL Offset: Not Supported 00:11:43.170 Transport SGL Data Block: Not Supported 00:11:43.170 Replay Protected Memory Block: Not Supported 00:11:43.170 00:11:43.170 Firmware Slot Information 00:11:43.170 ========================= 00:11:43.170 Active slot: 1 00:11:43.170 Slot 1 Firmware Revision: 24.05 00:11:43.170 00:11:43.170 00:11:43.170 Commands Supported and Effects 00:11:43.170 ============================== 00:11:43.170 Admin Commands 00:11:43.170 -------------- 00:11:43.170 Get Log Page (02h): Supported 00:11:43.170 Identify (06h): Supported 00:11:43.170 Abort (08h): Supported 00:11:43.170 Set Features (09h): Supported 00:11:43.170 Get Features (0Ah): Supported 00:11:43.170 Asynchronous Event Request (0Ch): Supported 00:11:43.170 Keep Alive (18h): Supported 00:11:43.170 I/O Commands 00:11:43.170 ------------ 00:11:43.170 Flush (00h): Supported LBA-Change 00:11:43.170 Write (01h): Supported LBA-Change 00:11:43.170 Read (02h): Supported 00:11:43.170 Compare (05h): Supported 00:11:43.170 Write Zeroes (08h): Supported LBA-Change 00:11:43.170 Dataset Management (09h): Supported LBA-Change 00:11:43.170 Copy (19h): Supported LBA-Change 00:11:43.170 Unknown (79h): Supported LBA-Change 00:11:43.170 Unknown (7Ah): Supported 00:11:43.170 00:11:43.170 Error Log 00:11:43.170 ========= 00:11:43.170 00:11:43.170 Arbitration 00:11:43.170 =========== 00:11:43.170 Arbitration Burst: 1 00:11:43.170 00:11:43.170 Power Management 00:11:43.170 ================ 00:11:43.170 Number of Power States: 1 00:11:43.170 Current Power State: Power State #0 00:11:43.170 Power State #0: 00:11:43.170 Max Power: 0.00 W 00:11:43.170 Non-Operational State: Operational 00:11:43.170 Entry Latency: Not Reported 00:11:43.170 Exit Latency: Not Reported 00:11:43.170 Relative Read Throughput: 0 00:11:43.170 Relative Read Latency: 0 00:11:43.170 Relative Write Throughput: 0 00:11:43.170 Relative Write Latency: 0 00:11:43.170 Idle Power: Not Reported 00:11:43.170 Active Power: Not Reported 00:11:43.170 Non-Operational Permissive Mode: Not Supported 00:11:43.170 00:11:43.170 Health Information 00:11:43.170 ================== 00:11:43.170 Critical Warnings: 00:11:43.170 Available Spare Space: OK 00:11:43.170 Temperature: OK 00:11:43.170 Device Reliability: OK 00:11:43.170 Read Only: No 00:11:43.170 Volatile Memory Backup: OK 00:11:43.170 Current Temperature: 0 Kelvin (-2[2024-05-15 16:56:21.964955] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:43.170 [2024-05-15 16:56:21.964963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:43.170 [2024-05-15 16:56:21.964987] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:43.170 [2024-05-15 16:56:21.964996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.170 [2024-05-15 16:56:21.965003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.170 [2024-05-15 16:56:21.965009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.170 [2024-05-15 16:56:21.965015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.170 [2024-05-15 16:56:21.965095] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:43.170 [2024-05-15 16:56:21.965106] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:43.170 [2024-05-15 16:56:21.966094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:43.170 [2024-05-15 16:56:21.966134] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:43.170 [2024-05-15 16:56:21.966141] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:43.170 [2024-05-15 16:56:21.967105] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:43.170 [2024-05-15 16:56:21.967117] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:43.170 [2024-05-15 16:56:21.967182] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:43.170 [2024-05-15 16:56:21.973553] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:43.432 73 Celsius) 00:11:43.432 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:43.432 Available Spare: 0% 00:11:43.432 Available Spare Threshold: 0% 00:11:43.432 Life Percentage Used: 0% 00:11:43.432 Data Units Read: 0 00:11:43.432 Data Units Written: 0 00:11:43.432 Host Read Commands: 0 00:11:43.432 Host Write Commands: 0 00:11:43.432 Controller Busy Time: 0 minutes 00:11:43.432 Power Cycles: 0 00:11:43.432 Power On Hours: 0 hours 00:11:43.432 Unsafe Shutdowns: 0 00:11:43.432 Unrecoverable Media Errors: 0 00:11:43.432 Lifetime Error Log Entries: 0 00:11:43.432 Warning Temperature Time: 0 minutes 00:11:43.432 Critical Temperature Time: 0 minutes 00:11:43.432 00:11:43.432 Number of Queues 00:11:43.432 ================ 00:11:43.432 Number of I/O Submission Queues: 127 00:11:43.432 Number of I/O Completion Queues: 127 00:11:43.432 00:11:43.432 Active Namespaces 00:11:43.432 ================= 00:11:43.432 Namespace 
ID:1 00:11:43.432 Error Recovery Timeout: Unlimited 00:11:43.432 Command Set Identifier: NVM (00h) 00:11:43.432 Deallocate: Supported 00:11:43.432 Deallocated/Unwritten Error: Not Supported 00:11:43.432 Deallocated Read Value: Unknown 00:11:43.432 Deallocate in Write Zeroes: Not Supported 00:11:43.432 Deallocated Guard Field: 0xFFFF 00:11:43.432 Flush: Supported 00:11:43.432 Reservation: Supported 00:11:43.432 Namespace Sharing Capabilities: Multiple Controllers 00:11:43.432 Size (in LBAs): 131072 (0GiB) 00:11:43.432 Capacity (in LBAs): 131072 (0GiB) 00:11:43.432 Utilization (in LBAs): 131072 (0GiB) 00:11:43.432 NGUID: 777D4668D8E541F8BDF5369A54EB4D71 00:11:43.432 UUID: 777d4668-d8e5-41f8-bdf5-369a54eb4d71 00:11:43.432 Thin Provisioning: Not Supported 00:11:43.432 Per-NS Atomic Units: Yes 00:11:43.432 Atomic Boundary Size (Normal): 0 00:11:43.432 Atomic Boundary Size (PFail): 0 00:11:43.432 Atomic Boundary Offset: 0 00:11:43.432 Maximum Single Source Range Length: 65535 00:11:43.432 Maximum Copy Length: 65535 00:11:43.432 Maximum Source Range Count: 1 00:11:43.432 NGUID/EUI64 Never Reused: No 00:11:43.432 Namespace Write Protected: No 00:11:43.432 Number of LBA Formats: 1 00:11:43.432 Current LBA Format: LBA Format #00 00:11:43.432 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:43.432 00:11:43.432 16:56:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:43.432 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.432 [2024-05-15 16:56:22.158179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:48.740 Initializing NVMe Controllers 00:11:48.740 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:48.740 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:48.740 Initialization complete. Launching workers. 00:11:48.740 ======================================================== 00:11:48.740 Latency(us) 00:11:48.740 Device Information : IOPS MiB/s Average min max 00:11:48.740 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40030.14 156.37 3197.45 832.99 6819.15 00:11:48.740 ======================================================== 00:11:48.740 Total : 40030.14 156.37 3197.45 832.99 6819.15 00:11:48.740 00:11:48.740 [2024-05-15 16:56:27.177009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:48.740 16:56:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:48.740 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.740 [2024-05-15 16:56:27.358864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:54.033 Initializing NVMe Controllers 00:11:54.033 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:54.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:54.033 Initialization complete. Launching workers. 
00:11:54.033 ======================================================== 00:11:54.033 Latency(us) 00:11:54.033 Device Information : IOPS MiB/s Average min max 00:11:54.033 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.96 62.72 7977.67 5985.68 9979.77 00:11:54.033 ======================================================== 00:11:54.033 Total : 16055.96 62.72 7977.67 5985.68 9979.77 00:11:54.033 00:11:54.033 [2024-05-15 16:56:32.399072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.033 16:56:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:54.033 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.033 [2024-05-15 16:56:32.592954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.329 [2024-05-15 16:56:37.655739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.329 Initializing NVMe Controllers 00:11:59.329 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:59.329 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:59.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:59.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:59.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:59.329 Initialization complete. Launching workers. 00:11:59.329 Starting thread on core 2 00:11:59.329 Starting thread on core 3 00:11:59.329 Starting thread on core 1 00:11:59.329 16:56:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:59.329 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.329 [2024-05-15 16:56:37.907305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:02.626 [2024-05-15 16:56:40.958274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:02.626 Initializing NVMe Controllers 00:12:02.626 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:02.626 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:02.626 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:02.626 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:02.626 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:02.626 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:02.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:02.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:02.626 Initialization complete. Launching workers. 
00:12:02.626 Starting thread on core 1 with urgent priority queue 00:12:02.626 Starting thread on core 2 with urgent priority queue 00:12:02.626 Starting thread on core 3 with urgent priority queue 00:12:02.626 Starting thread on core 0 with urgent priority queue 00:12:02.626 SPDK bdev Controller (SPDK1 ) core 0: 8859.67 IO/s 11.29 secs/100000 ios 00:12:02.626 SPDK bdev Controller (SPDK1 ) core 1: 15020.33 IO/s 6.66 secs/100000 ios 00:12:02.626 SPDK bdev Controller (SPDK1 ) core 2: 8102.33 IO/s 12.34 secs/100000 ios 00:12:02.626 SPDK bdev Controller (SPDK1 ) core 3: 14280.67 IO/s 7.00 secs/100000 ios 00:12:02.626 ======================================================== 00:12:02.626 00:12:02.626 16:56:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:02.626 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.626 [2024-05-15 16:56:41.223046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:02.626 Initializing NVMe Controllers 00:12:02.626 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:02.626 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:02.626 Namespace ID: 1 size: 0GB 00:12:02.626 Initialization complete. 00:12:02.626 INFO: using host memory buffer for IO 00:12:02.626 Hello world! 00:12:02.626 [2024-05-15 16:56:41.257265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:02.626 16:56:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:02.626 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.913 [2024-05-15 16:56:41.516947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:03.893 Initializing NVMe Controllers 00:12:03.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:03.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:03.893 Initialization complete. Launching workers. 
00:12:03.893 submit (in ns) avg, min, max = 8293.7, 3970.0, 4114993.3 00:12:03.893 complete (in ns) avg, min, max = 16209.2, 2383.3, 5993160.0 00:12:03.893 00:12:03.893 Submit histogram 00:12:03.893 ================ 00:12:03.893 Range in us Cumulative Count 00:12:03.893 3.947 - 3.973: 0.0052% ( 1) 00:12:03.893 3.973 - 4.000: 1.9308% ( 370) 00:12:03.893 4.000 - 4.027: 9.0710% ( 1372) 00:12:03.893 4.027 - 4.053: 20.3903% ( 2175) 00:12:03.893 4.053 - 4.080: 32.3237% ( 2293) 00:12:03.893 4.080 - 4.107: 42.5397% ( 1963) 00:12:03.893 4.107 - 4.133: 54.2649% ( 2253) 00:12:03.893 4.133 - 4.160: 69.7892% ( 2983) 00:12:03.893 4.160 - 4.187: 84.1426% ( 2758) 00:12:03.893 4.187 - 4.213: 93.2397% ( 1748) 00:12:03.893 4.213 - 4.240: 97.2313% ( 767) 00:12:03.893 4.240 - 4.267: 98.8238% ( 306) 00:12:03.893 4.267 - 4.293: 99.3078% ( 93) 00:12:03.893 4.293 - 4.320: 99.5004% ( 37) 00:12:03.893 4.320 - 4.347: 99.5316% ( 6) 00:12:03.893 4.347 - 4.373: 99.5368% ( 1) 00:12:03.893 4.373 - 4.400: 99.5420% ( 1) 00:12:03.893 4.400 - 4.427: 99.5472% ( 1) 00:12:03.893 4.427 - 4.453: 99.5524% ( 1) 00:12:03.893 4.587 - 4.613: 99.5576% ( 1) 00:12:03.893 4.640 - 4.667: 99.5628% ( 1) 00:12:03.893 4.667 - 4.693: 99.5680% ( 1) 00:12:03.893 4.987 - 5.013: 99.5733% ( 1) 00:12:03.893 5.067 - 5.093: 99.5785% ( 1) 00:12:03.893 6.080 - 6.107: 99.5837% ( 1) 00:12:03.893 6.133 - 6.160: 99.5941% ( 2) 00:12:03.893 6.187 - 6.213: 99.5993% ( 1) 00:12:03.893 6.213 - 6.240: 99.6097% ( 2) 00:12:03.893 6.267 - 6.293: 99.6201% ( 2) 00:12:03.893 6.293 - 6.320: 99.6305% ( 2) 00:12:03.893 6.320 - 6.347: 99.6409% ( 2) 00:12:03.893 6.347 - 6.373: 99.6513% ( 2) 00:12:03.893 6.373 - 6.400: 99.6669% ( 3) 00:12:03.893 6.400 - 6.427: 99.6721% ( 1) 00:12:03.893 6.427 - 6.453: 99.6825% ( 2) 00:12:03.893 6.453 - 6.480: 99.6877% ( 1) 00:12:03.893 6.480 - 6.507: 99.6982% ( 2) 00:12:03.893 6.507 - 6.533: 99.7086% ( 2) 00:12:03.893 6.533 - 6.560: 99.7190% ( 2) 00:12:03.893 6.613 - 6.640: 99.7294% ( 2) 00:12:03.893 6.667 - 6.693: 99.7398% ( 2) 00:12:03.893 6.693 - 6.720: 99.7450% ( 1) 00:12:03.893 6.720 - 6.747: 99.7502% ( 1) 00:12:03.893 6.747 - 6.773: 99.7554% ( 1) 00:12:03.893 6.827 - 6.880: 99.7658% ( 2) 00:12:03.893 6.880 - 6.933: 99.7710% ( 1) 00:12:03.893 6.933 - 6.987: 99.7762% ( 1) 00:12:03.893 6.987 - 7.040: 99.7814% ( 1) 00:12:03.893 7.040 - 7.093: 99.7866% ( 1) 00:12:03.893 7.147 - 7.200: 99.8022% ( 3) 00:12:03.893 7.307 - 7.360: 99.8074% ( 1) 00:12:03.893 7.467 - 7.520: 99.8126% ( 1) 00:12:03.893 7.520 - 7.573: 99.8231% ( 2) 00:12:03.893 7.840 - 7.893: 99.8283% ( 1) 00:12:03.893 7.947 - 8.000: 99.8387% ( 2) 00:12:03.893 8.000 - 8.053: 99.8439% ( 1) 00:12:03.893 8.053 - 8.107: 99.8491% ( 1) 00:12:03.893 8.160 - 8.213: 99.8543% ( 1) 00:12:03.893 8.320 - 8.373: 99.8595% ( 1) 00:12:03.893 8.427 - 8.480: 99.8647% ( 1) 00:12:03.893 8.533 - 8.587: 99.8699% ( 1) 00:12:03.893 8.747 - 8.800: 99.8751% ( 1) 00:12:03.893 8.907 - 8.960: 99.8803% ( 1) 00:12:03.893 8.960 - 9.013: 99.8855% ( 1) 00:12:03.893 12.693 - 12.747: 99.8907% ( 1) 00:12:03.893 20.693 - 20.800: 99.8959% ( 1) 00:12:03.893 3986.773 - 4014.080: 99.9896% ( 18) 00:12:03.893 4014.080 - 4041.387: 99.9948% ( 1) 00:12:03.893 4096.000 - 4123.307: 100.0000% ( 1) 00:12:03.893 00:12:03.893 Complete histogram 00:12:03.893 ================== 00:12:03.893 Range in us Cumulative Count 00:12:03.893 2.373 - 2.387: 0.0104% ( 2) 00:12:03.893 2.387 - 2.400: 0.5464% ( 103) 00:12:03.893 2.400 - 2.413: 1.4260% ( 169) 00:12:03.893 2.413 - 2.427: 1.6393% ( 41) 00:12:03.893 2.427 - 2.440: 1.8683% ( 44) 00:12:03.893 
2.440 - 2.453: 2.0141% ( 28) 00:12:03.893 2.453 - 2.467: 37.9183% ( 6899) 00:12:03.893 2.467 - 2.480: 54.0775% ( 3105) 00:12:03.893 2.480 - [2024-05-15 16:56:42.537568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:03.893 2.493: 66.4741% ( 2382) 00:12:03.893 2.493 - 2.507: 75.1444% ( 1666) 00:12:03.893 2.507 - 2.520: 80.3747% ( 1005) 00:12:03.893 2.520 - 2.533: 83.5961% ( 619) 00:12:03.893 2.533 - 2.547: 89.3729% ( 1110) 00:12:03.893 2.547 - 2.560: 94.0411% ( 897) 00:12:03.893 2.560 - 2.573: 96.3986% ( 453) 00:12:03.893 2.573 - 2.587: 98.0952% ( 326) 00:12:03.893 2.587 - 2.600: 99.0684% ( 187) 00:12:03.893 2.600 - 2.613: 99.2818% ( 41) 00:12:03.893 2.613 - 2.627: 99.3599% ( 15) 00:12:03.893 2.627 - 2.640: 99.3807% ( 4) 00:12:03.893 4.587 - 4.613: 99.3911% ( 2) 00:12:03.893 4.613 - 4.640: 99.3963% ( 1) 00:12:03.893 4.640 - 4.667: 99.4015% ( 1) 00:12:03.893 4.693 - 4.720: 99.4171% ( 3) 00:12:03.893 4.720 - 4.747: 99.4223% ( 1) 00:12:03.893 4.747 - 4.773: 99.4275% ( 1) 00:12:03.893 4.800 - 4.827: 99.4327% ( 1) 00:12:03.893 4.827 - 4.853: 99.4379% ( 1) 00:12:03.893 4.853 - 4.880: 99.4431% ( 1) 00:12:03.893 4.880 - 4.907: 99.4483% ( 1) 00:12:03.893 4.907 - 4.933: 99.4692% ( 4) 00:12:03.893 4.933 - 4.960: 99.4744% ( 1) 00:12:03.893 5.013 - 5.040: 99.4796% ( 1) 00:12:03.893 5.040 - 5.067: 99.4848% ( 1) 00:12:03.893 5.067 - 5.093: 99.4900% ( 1) 00:12:03.893 5.173 - 5.200: 99.4952% ( 1) 00:12:03.893 5.333 - 5.360: 99.5004% ( 1) 00:12:03.893 5.360 - 5.387: 99.5056% ( 1) 00:12:03.893 5.413 - 5.440: 99.5108% ( 1) 00:12:03.893 5.547 - 5.573: 99.5212% ( 2) 00:12:03.893 5.600 - 5.627: 99.5316% ( 2) 00:12:03.893 5.627 - 5.653: 99.5420% ( 2) 00:12:03.893 5.653 - 5.680: 99.5524% ( 2) 00:12:03.893 5.680 - 5.707: 99.5576% ( 1) 00:12:03.893 5.787 - 5.813: 99.5628% ( 1) 00:12:03.893 5.920 - 5.947: 99.5680% ( 1) 00:12:03.893 6.107 - 6.133: 99.5733% ( 1) 00:12:03.893 6.213 - 6.240: 99.5785% ( 1) 00:12:03.893 6.240 - 6.267: 99.5837% ( 1) 00:12:03.893 6.373 - 6.400: 99.5889% ( 1) 00:12:03.893 6.427 - 6.453: 99.5941% ( 1) 00:12:03.893 6.507 - 6.533: 99.5993% ( 1) 00:12:03.893 6.613 - 6.640: 99.6045% ( 1) 00:12:03.893 6.987 - 7.040: 99.6097% ( 1) 00:12:03.893 7.520 - 7.573: 99.6149% ( 1) 00:12:03.893 7.840 - 7.893: 99.6201% ( 1) 00:12:03.893 8.160 - 8.213: 99.6253% ( 1) 00:12:03.893 8.693 - 8.747: 99.6305% ( 1) 00:12:03.893 11.040 - 11.093: 99.6357% ( 1) 00:12:03.893 12.640 - 12.693: 99.6409% ( 1) 00:12:03.893 13.973 - 14.080: 99.6461% ( 1) 00:12:03.893 38.400 - 38.613: 99.6513% ( 1) 00:12:03.893 44.800 - 45.013: 99.6565% ( 1) 00:12:03.893 1733.973 - 1740.800: 99.6617% ( 1) 00:12:03.893 3986.773 - 4014.080: 99.9948% ( 64) 00:12:03.893 5980.160 - 6007.467: 100.0000% ( 1) 00:12:03.893 00:12:03.893 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:03.893 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:03.893 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:03.893 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:03.893 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:04.154 [ 00:12:04.154 { 00:12:04.154 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 
00:12:04.154 "subtype": "Discovery", 00:12:04.154 "listen_addresses": [], 00:12:04.154 "allow_any_host": true, 00:12:04.154 "hosts": [] 00:12:04.154 }, 00:12:04.154 { 00:12:04.154 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:04.154 "subtype": "NVMe", 00:12:04.154 "listen_addresses": [ 00:12:04.154 { 00:12:04.154 "trtype": "VFIOUSER", 00:12:04.154 "adrfam": "IPv4", 00:12:04.154 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:04.154 "trsvcid": "0" 00:12:04.154 } 00:12:04.154 ], 00:12:04.154 "allow_any_host": true, 00:12:04.154 "hosts": [], 00:12:04.154 "serial_number": "SPDK1", 00:12:04.154 "model_number": "SPDK bdev Controller", 00:12:04.154 "max_namespaces": 32, 00:12:04.154 "min_cntlid": 1, 00:12:04.154 "max_cntlid": 65519, 00:12:04.154 "namespaces": [ 00:12:04.154 { 00:12:04.154 "nsid": 1, 00:12:04.154 "bdev_name": "Malloc1", 00:12:04.154 "name": "Malloc1", 00:12:04.154 "nguid": "777D4668D8E541F8BDF5369A54EB4D71", 00:12:04.154 "uuid": "777d4668-d8e5-41f8-bdf5-369a54eb4d71" 00:12:04.154 } 00:12:04.154 ] 00:12:04.154 }, 00:12:04.154 { 00:12:04.154 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:04.154 "subtype": "NVMe", 00:12:04.154 "listen_addresses": [ 00:12:04.154 { 00:12:04.154 "trtype": "VFIOUSER", 00:12:04.154 "adrfam": "IPv4", 00:12:04.154 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:04.154 "trsvcid": "0" 00:12:04.154 } 00:12:04.154 ], 00:12:04.154 "allow_any_host": true, 00:12:04.154 "hosts": [], 00:12:04.154 "serial_number": "SPDK2", 00:12:04.154 "model_number": "SPDK bdev Controller", 00:12:04.154 "max_namespaces": 32, 00:12:04.154 "min_cntlid": 1, 00:12:04.154 "max_cntlid": 65519, 00:12:04.154 "namespaces": [ 00:12:04.154 { 00:12:04.154 "nsid": 1, 00:12:04.154 "bdev_name": "Malloc2", 00:12:04.154 "name": "Malloc2", 00:12:04.154 "nguid": "758C06AEB0AA4C46B153A43A3B50FDE0", 00:12:04.154 "uuid": "758c06ae-b0aa-4c46-b153-a43a3b50fde0" 00:12:04.154 } 00:12:04.154 ] 00:12:04.154 } 00:12:04.154 ] 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1371120 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:04.154 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.154 Malloc3 00:12:04.154 [2024-05-15 16:56:42.935792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:04.154 16:56:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:04.414 [2024-05-15 16:56:43.106883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:04.414 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:04.414 Asynchronous Event Request test 00:12:04.414 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.414 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:04.414 Registering asynchronous event callbacks... 00:12:04.414 Starting namespace attribute notice tests for all controllers... 00:12:04.414 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:04.414 aer_cb - Changed Namespace 00:12:04.414 Cleaning up... 
00:12:04.676 [ 00:12:04.676 { 00:12:04.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:04.676 "subtype": "Discovery", 00:12:04.676 "listen_addresses": [], 00:12:04.676 "allow_any_host": true, 00:12:04.676 "hosts": [] 00:12:04.676 }, 00:12:04.676 { 00:12:04.676 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:04.676 "subtype": "NVMe", 00:12:04.676 "listen_addresses": [ 00:12:04.676 { 00:12:04.676 "trtype": "VFIOUSER", 00:12:04.676 "adrfam": "IPv4", 00:12:04.676 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:04.676 "trsvcid": "0" 00:12:04.676 } 00:12:04.676 ], 00:12:04.676 "allow_any_host": true, 00:12:04.676 "hosts": [], 00:12:04.676 "serial_number": "SPDK1", 00:12:04.676 "model_number": "SPDK bdev Controller", 00:12:04.676 "max_namespaces": 32, 00:12:04.676 "min_cntlid": 1, 00:12:04.676 "max_cntlid": 65519, 00:12:04.676 "namespaces": [ 00:12:04.676 { 00:12:04.676 "nsid": 1, 00:12:04.676 "bdev_name": "Malloc1", 00:12:04.676 "name": "Malloc1", 00:12:04.676 "nguid": "777D4668D8E541F8BDF5369A54EB4D71", 00:12:04.676 "uuid": "777d4668-d8e5-41f8-bdf5-369a54eb4d71" 00:12:04.676 }, 00:12:04.676 { 00:12:04.676 "nsid": 2, 00:12:04.676 "bdev_name": "Malloc3", 00:12:04.676 "name": "Malloc3", 00:12:04.676 "nguid": "B2DF97F5479D471E8FA7C175F68076E0", 00:12:04.676 "uuid": "b2df97f5-479d-471e-8fa7-c175f68076e0" 00:12:04.676 } 00:12:04.676 ] 00:12:04.676 }, 00:12:04.676 { 00:12:04.676 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:04.676 "subtype": "NVMe", 00:12:04.676 "listen_addresses": [ 00:12:04.676 { 00:12:04.676 "trtype": "VFIOUSER", 00:12:04.676 "adrfam": "IPv4", 00:12:04.676 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:04.676 "trsvcid": "0" 00:12:04.676 } 00:12:04.676 ], 00:12:04.676 "allow_any_host": true, 00:12:04.676 "hosts": [], 00:12:04.676 "serial_number": "SPDK2", 00:12:04.676 "model_number": "SPDK bdev Controller", 00:12:04.676 "max_namespaces": 32, 00:12:04.676 "min_cntlid": 1, 00:12:04.676 "max_cntlid": 65519, 00:12:04.676 "namespaces": [ 00:12:04.676 { 00:12:04.676 "nsid": 1, 00:12:04.676 "bdev_name": "Malloc2", 00:12:04.676 "name": "Malloc2", 00:12:04.676 "nguid": "758C06AEB0AA4C46B153A43A3B50FDE0", 00:12:04.676 "uuid": "758c06ae-b0aa-4c46-b153-a43a3b50fde0" 00:12:04.676 } 00:12:04.676 ] 00:12:04.676 } 00:12:04.676 ] 00:12:04.676 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1371120 00:12:04.676 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:04.676 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:04.676 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:04.676 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:04.676 [2024-05-15 16:56:43.330793] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:12:04.676 [2024-05-15 16:56:43.330835] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1371343 ] 00:12:04.676 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.676 [2024-05-15 16:56:43.364078] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:04.676 [2024-05-15 16:56:43.370785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:04.676 [2024-05-15 16:56:43.370806] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1f605e7000 00:12:04.676 [2024-05-15 16:56:43.371784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:04.676 [2024-05-15 16:56:43.372787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:04.676 [2024-05-15 16:56:43.373791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:04.676 [2024-05-15 16:56:43.374793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:04.676 [2024-05-15 16:56:43.375796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:04.676 [2024-05-15 16:56:43.376799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:04.677 [2024-05-15 16:56:43.377807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:04.677 [2024-05-15 16:56:43.378818] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:04.677 [2024-05-15 16:56:43.379832] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:04.677 [2024-05-15 16:56:43.379845] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1f605dc000 00:12:04.677 [2024-05-15 16:56:43.381169] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:04.677 [2024-05-15 16:56:43.399701] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:04.677 [2024-05-15 16:56:43.399725] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:04.677 [2024-05-15 16:56:43.404799] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:04.677 [2024-05-15 16:56:43.404842] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:04.677 [2024-05-15 16:56:43.404921] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:12:04.677 [2024-05-15 16:56:43.404933] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:04.677 [2024-05-15 16:56:43.404939] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:04.677 [2024-05-15 16:56:43.405804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:04.677 [2024-05-15 16:56:43.405813] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:04.677 [2024-05-15 16:56:43.405819] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:04.677 [2024-05-15 16:56:43.406809] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:04.677 [2024-05-15 16:56:43.406817] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:04.677 [2024-05-15 16:56:43.406824] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:04.677 [2024-05-15 16:56:43.407816] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:04.677 [2024-05-15 16:56:43.407825] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:04.677 [2024-05-15 16:56:43.408824] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:04.677 [2024-05-15 16:56:43.408837] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:04.677 [2024-05-15 16:56:43.408842] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:04.677 [2024-05-15 16:56:43.408849] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:04.677 [2024-05-15 16:56:43.408954] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:04.677 [2024-05-15 16:56:43.408959] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:04.677 [2024-05-15 16:56:43.408963] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:04.677 [2024-05-15 16:56:43.409834] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:04.677 [2024-05-15 16:56:43.410839] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:04.677 [2024-05-15 16:56:43.411851] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:04.677 [2024-05-15 16:56:43.412853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:04.677 [2024-05-15 16:56:43.412893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:04.677 [2024-05-15 16:56:43.413860] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:04.677 [2024-05-15 16:56:43.413869] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:04.677 [2024-05-15 16:56:43.413873] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.413894] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:04.677 [2024-05-15 16:56:43.413907] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.413920] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:04.677 [2024-05-15 16:56:43.413925] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:04.677 [2024-05-15 16:56:43.413938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:04.677 [2024-05-15 16:56:43.420553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:04.677 [2024-05-15 16:56:43.420565] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:04.677 [2024-05-15 16:56:43.420569] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:04.677 [2024-05-15 16:56:43.420574] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:04.677 [2024-05-15 16:56:43.420578] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:04.677 [2024-05-15 16:56:43.420583] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:04.677 [2024-05-15 16:56:43.420590] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:04.677 [2024-05-15 16:56:43.420594] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.420604] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.420615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:04.677 [2024-05-15 16:56:43.428551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:04.677 [2024-05-15 16:56:43.428564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.677 [2024-05-15 16:56:43.428573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.677 [2024-05-15 16:56:43.428581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.677 [2024-05-15 16:56:43.428589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.677 [2024-05-15 16:56:43.428594] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.428600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.428609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:04.677 [2024-05-15 16:56:43.436550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:04.677 [2024-05-15 16:56:43.436557] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:04.677 [2024-05-15 16:56:43.436564] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.436571] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.436576] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.436585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:04.677 [2024-05-15 16:56:43.444550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:04.677 [2024-05-15 16:56:43.444602] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.444610] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.444617] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:04.677 [2024-05-15 16:56:43.444622] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:04.677 [2024-05-15 16:56:43.444628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:04.677 
[2024-05-15 16:56:43.452552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:04.677 [2024-05-15 16:56:43.452566] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:04.677 [2024-05-15 16:56:43.452574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.452581] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.452588] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:04.677 [2024-05-15 16:56:43.452592] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:04.677 [2024-05-15 16:56:43.452599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:04.677 [2024-05-15 16:56:43.460551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:04.677 [2024-05-15 16:56:43.460562] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.460570] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:04.677 [2024-05-15 16:56:43.460577] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:04.677 [2024-05-15 16:56:43.460581] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:04.678 [2024-05-15 16:56:43.460587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.468551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.468565] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:04.678 [2024-05-15 16:56:43.468572] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:04.678 [2024-05-15 16:56:43.468578] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:04.678 [2024-05-15 16:56:43.468584] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:04.678 [2024-05-15 16:56:43.468589] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:04.678 [2024-05-15 16:56:43.468594] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:04.678 [2024-05-15 16:56:43.468598] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:04.678 [2024-05-15 16:56:43.468603] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:04.678 [2024-05-15 16:56:43.468621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.476553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.476566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.484551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.484566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.492549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.492562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.500550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.500563] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:04.678 [2024-05-15 16:56:43.500568] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:04.678 [2024-05-15 16:56:43.500571] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:04.678 [2024-05-15 16:56:43.500575] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:04.678 [2024-05-15 16:56:43.500581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:04.678 [2024-05-15 16:56:43.500589] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:04.678 [2024-05-15 16:56:43.500593] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:04.678 [2024-05-15 16:56:43.500599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.500606] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:04.678 [2024-05-15 16:56:43.500610] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:04.678 [2024-05-15 16:56:43.500616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.500626] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:04.678 [2024-05-15 16:56:43.500630] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:04.678 [2024-05-15 16:56:43.500636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:04.678 [2024-05-15 16:56:43.508552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.508567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.508576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:04.678 [2024-05-15 16:56:43.508584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:04.678 ===================================================== 00:12:04.678 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:04.678 ===================================================== 00:12:04.678 Controller Capabilities/Features 00:12:04.678 ================================ 00:12:04.678 Vendor ID: 4e58 00:12:04.678 Subsystem Vendor ID: 4e58 00:12:04.678 Serial Number: SPDK2 00:12:04.678 Model Number: SPDK bdev Controller 00:12:04.678 Firmware Version: 24.05 00:12:04.678 Recommended Arb Burst: 6 00:12:04.678 IEEE OUI Identifier: 8d 6b 50 00:12:04.678 Multi-path I/O 00:12:04.678 May have multiple subsystem ports: Yes 00:12:04.678 May have multiple controllers: Yes 00:12:04.678 Associated with SR-IOV VF: No 00:12:04.678 Max Data Transfer Size: 131072 00:12:04.678 Max Number of Namespaces: 32 00:12:04.678 Max Number of I/O Queues: 127 00:12:04.678 NVMe Specification Version (VS): 1.3 00:12:04.678 NVMe Specification Version (Identify): 1.3 00:12:04.678 Maximum Queue Entries: 256 00:12:04.678 Contiguous Queues Required: Yes 00:12:04.678 Arbitration Mechanisms Supported 00:12:04.678 Weighted Round Robin: Not Supported 00:12:04.678 Vendor Specific: Not Supported 00:12:04.678 Reset Timeout: 15000 ms 00:12:04.678 Doorbell Stride: 4 bytes 00:12:04.678 NVM Subsystem Reset: Not Supported 00:12:04.678 Command Sets Supported 00:12:04.678 NVM Command Set: Supported 00:12:04.678 Boot Partition: Not Supported 00:12:04.678 Memory Page Size Minimum: 4096 bytes 00:12:04.678 Memory Page Size Maximum: 4096 bytes 00:12:04.678 Persistent Memory Region: Not Supported 00:12:04.678 Optional Asynchronous Events Supported 00:12:04.678 Namespace Attribute Notices: Supported 00:12:04.678 Firmware Activation Notices: Not Supported 00:12:04.678 ANA Change Notices: Not Supported 00:12:04.678 PLE Aggregate Log Change Notices: Not Supported 00:12:04.678 LBA Status Info Alert Notices: Not Supported 00:12:04.678 EGE Aggregate Log Change Notices: Not Supported 00:12:04.678 Normal NVM Subsystem Shutdown event: Not Supported 00:12:04.678 Zone Descriptor Change Notices: Not Supported 00:12:04.678 Discovery Log Change Notices: Not Supported 00:12:04.678 Controller Attributes 00:12:04.678 128-bit Host Identifier: Supported 00:12:04.678 Non-Operational Permissive Mode: Not Supported 00:12:04.678 NVM Sets: Not Supported 00:12:04.678 Read Recovery Levels: Not Supported 00:12:04.678 Endurance Groups: Not Supported 00:12:04.678 Predictable Latency Mode: Not Supported 00:12:04.678 Traffic Based Keep ALive: Not Supported 00:12:04.678 Namespace Granularity: Not Supported 
00:12:04.678 SQ Associations: Not Supported 00:12:04.678 UUID List: Not Supported 00:12:04.678 Multi-Domain Subsystem: Not Supported 00:12:04.678 Fixed Capacity Management: Not Supported 00:12:04.678 Variable Capacity Management: Not Supported 00:12:04.678 Delete Endurance Group: Not Supported 00:12:04.678 Delete NVM Set: Not Supported 00:12:04.678 Extended LBA Formats Supported: Not Supported 00:12:04.678 Flexible Data Placement Supported: Not Supported 00:12:04.678 00:12:04.678 Controller Memory Buffer Support 00:12:04.678 ================================ 00:12:04.678 Supported: No 00:12:04.678 00:12:04.678 Persistent Memory Region Support 00:12:04.678 ================================ 00:12:04.678 Supported: No 00:12:04.678 00:12:04.678 Admin Command Set Attributes 00:12:04.678 ============================ 00:12:04.678 Security Send/Receive: Not Supported 00:12:04.678 Format NVM: Not Supported 00:12:04.678 Firmware Activate/Download: Not Supported 00:12:04.678 Namespace Management: Not Supported 00:12:04.678 Device Self-Test: Not Supported 00:12:04.678 Directives: Not Supported 00:12:04.678 NVMe-MI: Not Supported 00:12:04.678 Virtualization Management: Not Supported 00:12:04.678 Doorbell Buffer Config: Not Supported 00:12:04.678 Get LBA Status Capability: Not Supported 00:12:04.678 Command & Feature Lockdown Capability: Not Supported 00:12:04.678 Abort Command Limit: 4 00:12:04.678 Async Event Request Limit: 4 00:12:04.678 Number of Firmware Slots: N/A 00:12:04.678 Firmware Slot 1 Read-Only: N/A 00:12:04.678 Firmware Activation Without Reset: N/A 00:12:04.678 Multiple Update Detection Support: N/A 00:12:04.678 Firmware Update Granularity: No Information Provided 00:12:04.678 Per-Namespace SMART Log: No 00:12:04.678 Asymmetric Namespace Access Log Page: Not Supported 00:12:04.678 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:04.678 Command Effects Log Page: Supported 00:12:04.678 Get Log Page Extended Data: Supported 00:12:04.678 Telemetry Log Pages: Not Supported 00:12:04.678 Persistent Event Log Pages: Not Supported 00:12:04.678 Supported Log Pages Log Page: May Support 00:12:04.678 Commands Supported & Effects Log Page: Not Supported 00:12:04.678 Feature Identifiers & Effects Log Page:May Support 00:12:04.678 NVMe-MI Commands & Effects Log Page: May Support 00:12:04.678 Data Area 4 for Telemetry Log: Not Supported 00:12:04.678 Error Log Page Entries Supported: 128 00:12:04.678 Keep Alive: Supported 00:12:04.678 Keep Alive Granularity: 10000 ms 00:12:04.678 00:12:04.678 NVM Command Set Attributes 00:12:04.678 ========================== 00:12:04.678 Submission Queue Entry Size 00:12:04.678 Max: 64 00:12:04.678 Min: 64 00:12:04.678 Completion Queue Entry Size 00:12:04.678 Max: 16 00:12:04.678 Min: 16 00:12:04.678 Number of Namespaces: 32 00:12:04.678 Compare Command: Supported 00:12:04.678 Write Uncorrectable Command: Not Supported 00:12:04.679 Dataset Management Command: Supported 00:12:04.679 Write Zeroes Command: Supported 00:12:04.679 Set Features Save Field: Not Supported 00:12:04.679 Reservations: Not Supported 00:12:04.679 Timestamp: Not Supported 00:12:04.679 Copy: Supported 00:12:04.679 Volatile Write Cache: Present 00:12:04.679 Atomic Write Unit (Normal): 1 00:12:04.679 Atomic Write Unit (PFail): 1 00:12:04.679 Atomic Compare & Write Unit: 1 00:12:04.679 Fused Compare & Write: Supported 00:12:04.679 Scatter-Gather List 00:12:04.679 SGL Command Set: Supported (Dword aligned) 00:12:04.679 SGL Keyed: Not Supported 00:12:04.679 SGL Bit Bucket Descriptor: Not Supported 00:12:04.679 
SGL Metadata Pointer: Not Supported 00:12:04.679 Oversized SGL: Not Supported 00:12:04.679 SGL Metadata Address: Not Supported 00:12:04.679 SGL Offset: Not Supported 00:12:04.679 Transport SGL Data Block: Not Supported 00:12:04.679 Replay Protected Memory Block: Not Supported 00:12:04.679 00:12:04.679 Firmware Slot Information 00:12:04.679 ========================= 00:12:04.679 Active slot: 1 00:12:04.679 Slot 1 Firmware Revision: 24.05 00:12:04.679 00:12:04.679 00:12:04.679 Commands Supported and Effects 00:12:04.679 ============================== 00:12:04.679 Admin Commands 00:12:04.679 -------------- 00:12:04.679 Get Log Page (02h): Supported 00:12:04.679 Identify (06h): Supported 00:12:04.679 Abort (08h): Supported 00:12:04.679 Set Features (09h): Supported 00:12:04.679 Get Features (0Ah): Supported 00:12:04.679 Asynchronous Event Request (0Ch): Supported 00:12:04.679 Keep Alive (18h): Supported 00:12:04.679 I/O Commands 00:12:04.679 ------------ 00:12:04.679 Flush (00h): Supported LBA-Change 00:12:04.679 Write (01h): Supported LBA-Change 00:12:04.679 Read (02h): Supported 00:12:04.679 Compare (05h): Supported 00:12:04.679 Write Zeroes (08h): Supported LBA-Change 00:12:04.679 Dataset Management (09h): Supported LBA-Change 00:12:04.679 Copy (19h): Supported LBA-Change 00:12:04.679 Unknown (79h): Supported LBA-Change 00:12:04.679 Unknown (7Ah): Supported 00:12:04.679 00:12:04.679 Error Log 00:12:04.679 ========= 00:12:04.679 00:12:04.679 Arbitration 00:12:04.679 =========== 00:12:04.679 Arbitration Burst: 1 00:12:04.679 00:12:04.679 Power Management 00:12:04.679 ================ 00:12:04.679 Number of Power States: 1 00:12:04.679 Current Power State: Power State #0 00:12:04.679 Power State #0: 00:12:04.679 Max Power: 0.00 W 00:12:04.679 Non-Operational State: Operational 00:12:04.679 Entry Latency: Not Reported 00:12:04.679 Exit Latency: Not Reported 00:12:04.679 Relative Read Throughput: 0 00:12:04.679 Relative Read Latency: 0 00:12:04.679 Relative Write Throughput: 0 00:12:04.679 Relative Write Latency: 0 00:12:04.679 Idle Power: Not Reported 00:12:04.679 Active Power: Not Reported 00:12:04.679 Non-Operational Permissive Mode: Not Supported 00:12:04.679 00:12:04.679 Health Information 00:12:04.679 ================== 00:12:04.679 Critical Warnings: 00:12:04.679 Available Spare Space: OK 00:12:04.679 Temperature: OK 00:12:04.679 Device Reliability: OK 00:12:04.679 Read Only: No 00:12:04.679 Volatile Memory Backup: OK 00:12:04.679 Current Temperature: 0 Kelvin (-2[2024-05-15 16:56:43.508685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:04.940 [2024-05-15 16:56:43.516552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:04.940 [2024-05-15 16:56:43.516579] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:04.940 [2024-05-15 16:56:43.516588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.940 [2024-05-15 16:56:43.516595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.940 [2024-05-15 16:56:43.516601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.940 [2024-05-15 16:56:43.516610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.940 [2024-05-15 16:56:43.516660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:04.940 [2024-05-15 16:56:43.516670] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:04.940 [2024-05-15 16:56:43.517659] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:04.940 [2024-05-15 16:56:43.517707] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:04.940 [2024-05-15 16:56:43.517713] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:04.940 [2024-05-15 16:56:43.518671] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:04.940 [2024-05-15 16:56:43.518682] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:04.940 [2024-05-15 16:56:43.518732] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:04.940 [2024-05-15 16:56:43.520111] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:04.940 73 Celsius) 00:12:04.940 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:04.940 Available Spare: 0% 00:12:04.940 Available Spare Threshold: 0% 00:12:04.940 Life Percentage Used: 0% 00:12:04.940 Data Units Read: 0 00:12:04.940 Data Units Written: 0 00:12:04.940 Host Read Commands: 0 00:12:04.940 Host Write Commands: 0 00:12:04.940 Controller Busy Time: 0 minutes 00:12:04.940 Power Cycles: 0 00:12:04.940 Power On Hours: 0 hours 00:12:04.940 Unsafe Shutdowns: 0 00:12:04.940 Unrecoverable Media Errors: 0 00:12:04.940 Lifetime Error Log Entries: 0 00:12:04.940 Warning Temperature Time: 0 minutes 00:12:04.940 Critical Temperature Time: 0 minutes 00:12:04.940 00:12:04.940 Number of Queues 00:12:04.940 ================ 00:12:04.940 Number of I/O Submission Queues: 127 00:12:04.940 Number of I/O Completion Queues: 127 00:12:04.940 00:12:04.940 Active Namespaces 00:12:04.940 ================= 00:12:04.940 Namespace ID:1 00:12:04.940 Error Recovery Timeout: Unlimited 00:12:04.940 Command Set Identifier: NVM (00h) 00:12:04.940 Deallocate: Supported 00:12:04.940 Deallocated/Unwritten Error: Not Supported 00:12:04.940 Deallocated Read Value: Unknown 00:12:04.940 Deallocate in Write Zeroes: Not Supported 00:12:04.940 Deallocated Guard Field: 0xFFFF 00:12:04.940 Flush: Supported 00:12:04.940 Reservation: Supported 00:12:04.940 Namespace Sharing Capabilities: Multiple Controllers 00:12:04.940 Size (in LBAs): 131072 (0GiB) 00:12:04.940 Capacity (in LBAs): 131072 (0GiB) 00:12:04.940 Utilization (in LBAs): 131072 (0GiB) 00:12:04.940 NGUID: 758C06AEB0AA4C46B153A43A3B50FDE0 00:12:04.940 UUID: 758c06ae-b0aa-4c46-b153-a43a3b50fde0 00:12:04.940 Thin Provisioning: Not Supported 00:12:04.940 Per-NS Atomic Units: Yes 00:12:04.940 Atomic Boundary Size (Normal): 0 00:12:04.940 Atomic Boundary Size (PFail): 0 00:12:04.940 Atomic Boundary Offset: 0 00:12:04.940 Maximum Single Source Range Length: 65535 
00:12:04.940 Maximum Copy Length: 65535 00:12:04.940 Maximum Source Range Count: 1 00:12:04.940 NGUID/EUI64 Never Reused: No 00:12:04.940 Namespace Write Protected: No 00:12:04.940 Number of LBA Formats: 1 00:12:04.940 Current LBA Format: LBA Format #00 00:12:04.940 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:04.940 00:12:04.940 16:56:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:04.940 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.940 [2024-05-15 16:56:43.703558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:10.220 Initializing NVMe Controllers 00:12:10.220 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:10.220 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:10.220 Initialization complete. Launching workers. 00:12:10.220 ======================================================== 00:12:10.220 Latency(us) 00:12:10.220 Device Information : IOPS MiB/s Average min max 00:12:10.220 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40012.39 156.30 3198.87 830.01 6822.65 00:12:10.220 ======================================================== 00:12:10.220 Total : 40012.39 156.30 3198.87 830.01 6822.65 00:12:10.220 00:12:10.220 [2024-05-15 16:56:48.808724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:10.220 16:56:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:10.220 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.220 [2024-05-15 16:56:48.988262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:15.502 Initializing NVMe Controllers 00:12:15.502 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:15.502 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:15.502 Initialization complete. Launching workers. 
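The two spdk_nvme_perf runs in this part of the log (target/nvmf_vfio_user.sh@84 and @85) differ only in the -w argument, read versus write; the transport string and every other option are identical. A condensed sketch of the invocation, with the Jenkins workspace prefix shortened to the SPDK build tree for readability:

    build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2    # -w write for the @85 run

Of the options taken from the logged command lines, -q 128 is the queue depth, -o 4096 the I/O size in bytes, -t 5 the run time in seconds, and -c 0x2 pins the I/O worker to core 1, which matches the "NSID 1 with lcore 1" association lines above.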
00:12:15.502 ======================================================== 00:12:15.502 Latency(us) 00:12:15.502 Device Information : IOPS MiB/s Average min max 00:12:15.502 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36080.85 140.94 3547.34 1097.21 8030.92 00:12:15.502 ======================================================== 00:12:15.502 Total : 36080.85 140.94 3547.34 1097.21 8030.92 00:12:15.502 00:12:15.502 [2024-05-15 16:56:54.009009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:15.502 16:56:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:15.502 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.502 [2024-05-15 16:56:54.197145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.784 [2024-05-15 16:56:59.332631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.784 Initializing NVMe Controllers 00:12:20.784 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:20.784 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:20.784 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:20.784 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:20.784 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:20.784 Initialization complete. Launching workers. 00:12:20.784 Starting thread on core 2 00:12:20.784 Starting thread on core 3 00:12:20.784 Starting thread on core 1 00:12:20.784 16:56:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:20.784 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.784 [2024-05-15 16:56:59.586969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:24.086 [2024-05-15 16:57:02.726748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:24.086 Initializing NVMe Controllers 00:12:24.086 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.086 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:24.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:24.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:24.086 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:24.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:24.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:24.086 Initialization complete. Launching workers. 
00:12:24.086 Starting thread on core 1 with urgent priority queue 00:12:24.086 Starting thread on core 2 with urgent priority queue 00:12:24.086 Starting thread on core 3 with urgent priority queue 00:12:24.086 Starting thread on core 0 with urgent priority queue 00:12:24.086 SPDK bdev Controller (SPDK2 ) core 0: 4496.33 IO/s 22.24 secs/100000 ios 00:12:24.086 SPDK bdev Controller (SPDK2 ) core 1: 5479.67 IO/s 18.25 secs/100000 ios 00:12:24.086 SPDK bdev Controller (SPDK2 ) core 2: 4845.67 IO/s 20.64 secs/100000 ios 00:12:24.086 SPDK bdev Controller (SPDK2 ) core 3: 3259.33 IO/s 30.68 secs/100000 ios 00:12:24.086 ======================================================== 00:12:24.086 00:12:24.086 16:57:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:24.086 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.347 [2024-05-15 16:57:02.986980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:24.347 Initializing NVMe Controllers 00:12:24.347 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.347 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.347 Namespace ID: 1 size: 0GB 00:12:24.347 Initialization complete. 00:12:24.347 INFO: using host memory buffer for IO 00:12:24.347 Hello world! 00:12:24.347 [2024-05-15 16:57:02.998053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:24.347 16:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:24.347 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.608 [2024-05-15 16:57:03.254513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:25.550 Initializing NVMe Controllers 00:12:25.550 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:25.550 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:25.550 Initialization complete. Launching workers. 
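In the arbitration summary above, the two figures reported per core are the same measurement expressed two ways: the secs/100000 ios column is simply 100000 divided by the IO/s column. A quick check against core 0: 100000 / 4496.33 IO/s ≈ 22.24 s, matching the logged value; core 3, the slowest at 3259.33 IO/s, correspondingly shows the longest time at 30.68 s.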
00:12:25.550 submit (in ns) avg, min, max = 8064.5, 3952.5, 4007754.2 00:12:25.550 complete (in ns) avg, min, max = 17202.5, 2370.8, 4005376.7 00:12:25.550 00:12:25.550 Submit histogram 00:12:25.550 ================ 00:12:25.550 Range in us Cumulative Count 00:12:25.550 3.947 - 3.973: 1.2279% ( 236) 00:12:25.550 3.973 - 4.000: 7.5130% ( 1208) 00:12:25.550 4.000 - 4.027: 16.8106% ( 1787) 00:12:25.550 4.027 - 4.053: 28.4339% ( 2234) 00:12:25.550 4.053 - 4.080: 38.6681% ( 1967) 00:12:25.550 4.080 - 4.107: 48.7305% ( 1934) 00:12:25.550 4.107 - 4.133: 63.8814% ( 2912) 00:12:25.550 4.133 - 4.160: 79.3913% ( 2981) 00:12:25.550 4.160 - 4.187: 91.1863% ( 2267) 00:12:25.550 4.187 - 4.213: 96.6129% ( 1043) 00:12:25.550 4.213 - 4.240: 98.5068% ( 364) 00:12:25.550 4.240 - 4.267: 99.1779% ( 129) 00:12:25.550 4.267 - 4.293: 99.3704% ( 37) 00:12:25.550 4.293 - 4.320: 99.4277% ( 11) 00:12:25.550 4.320 - 4.347: 99.4537% ( 5) 00:12:25.550 4.347 - 4.373: 99.4589% ( 1) 00:12:25.550 4.373 - 4.400: 99.4641% ( 1) 00:12:25.550 4.507 - 4.533: 99.4693% ( 1) 00:12:25.550 4.720 - 4.747: 99.4745% ( 1) 00:12:25.550 4.773 - 4.800: 99.4797% ( 1) 00:12:25.550 4.827 - 4.853: 99.4849% ( 1) 00:12:25.550 4.880 - 4.907: 99.4901% ( 1) 00:12:25.550 4.933 - 4.960: 99.4953% ( 1) 00:12:25.550 4.960 - 4.987: 99.5005% ( 1) 00:12:25.550 5.093 - 5.120: 99.5057% ( 1) 00:12:25.550 5.333 - 5.360: 99.5109% ( 1) 00:12:25.550 5.573 - 5.600: 99.5161% ( 1) 00:12:25.550 5.627 - 5.653: 99.5213% ( 1) 00:12:25.550 5.893 - 5.920: 99.5265% ( 1) 00:12:25.550 6.053 - 6.080: 99.5369% ( 2) 00:12:25.550 6.080 - 6.107: 99.5473% ( 2) 00:12:25.550 6.107 - 6.133: 99.5682% ( 4) 00:12:25.550 6.133 - 6.160: 99.5734% ( 1) 00:12:25.550 6.160 - 6.187: 99.5786% ( 1) 00:12:25.550 6.213 - 6.240: 99.5838% ( 1) 00:12:25.550 6.373 - 6.400: 99.5890% ( 1) 00:12:25.550 6.800 - 6.827: 99.5994% ( 2) 00:12:25.550 6.827 - 6.880: 99.6098% ( 2) 00:12:25.550 6.933 - 6.987: 99.6150% ( 1) 00:12:25.550 6.987 - 7.040: 99.6202% ( 1) 00:12:25.550 7.040 - 7.093: 99.6254% ( 1) 00:12:25.550 7.093 - 7.147: 99.6358% ( 2) 00:12:25.550 7.147 - 7.200: 99.6410% ( 1) 00:12:25.550 7.200 - 7.253: 99.6618% ( 4) 00:12:25.551 7.253 - 7.307: 99.6670% ( 1) 00:12:25.551 7.307 - 7.360: 99.6722% ( 1) 00:12:25.551 7.413 - 7.467: 99.6826% ( 2) 00:12:25.551 7.467 - 7.520: 99.6878% ( 1) 00:12:25.551 7.573 - 7.627: 99.6982% ( 2) 00:12:25.551 7.627 - 7.680: 99.7034% ( 1) 00:12:25.551 7.680 - 7.733: 99.7190% ( 3) 00:12:25.551 7.733 - 7.787: 99.7294% ( 2) 00:12:25.551 7.787 - 7.840: 99.7503% ( 4) 00:12:25.551 7.840 - 7.893: 99.7659% ( 3) 00:12:25.551 7.893 - 7.947: 99.7763% ( 2) 00:12:25.551 8.053 - 8.107: 99.7815% ( 1) 00:12:25.551 8.107 - 8.160: 99.7867% ( 1) 00:12:25.551 8.213 - 8.267: 99.7919% ( 1) 00:12:25.551 8.267 - 8.320: 99.8075% ( 3) 00:12:25.551 8.320 - 8.373: 99.8127% ( 1) 00:12:25.551 8.427 - 8.480: 99.8179% ( 1) 00:12:25.551 8.587 - 8.640: 99.8231% ( 1) 00:12:25.551 8.640 - 8.693: 99.8439% ( 4) 00:12:25.551 8.747 - 8.800: 99.8491% ( 1) 00:12:25.551 8.960 - 9.013: 99.8543% ( 1) 00:12:25.551 9.067 - 9.120: 99.8595% ( 1) 00:12:25.551 9.173 - 9.227: 99.8647% ( 1) 00:12:25.551 9.280 - 9.333: 99.8803% ( 3) 00:12:25.551 9.333 - 9.387: 99.8855% ( 1) 00:12:25.551 9.440 - 9.493: 99.8959% ( 2) 00:12:25.551 10.827 - 10.880: 99.9011% ( 1) 00:12:25.551 3986.773 - 4014.080: 100.0000% ( 19) 00:12:25.551 00:12:25.551 Complete histogram 00:12:25.551 ================== 00:12:25.551 Range in us Cumulative Count 00:12:25.551 2.360 - 2.373: 0.0052% ( 1) 00:12:25.551 2.387 - 2.400: 0.6035% ( 115) 00:12:25.551 2.400 - 
2.413: 1.3580% ( 145) 00:12:25.551 2.413 - 2.427: 1.4880% ( 25) 00:12:25.551 2.427 - 2.440: 1.7222% ( 45) 00:12:25.551 2.440 - [2024-05-15 16:57:04.346321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:25.812 2.453: 1.7586% ( 7) 00:12:25.812 2.453 - 2.467: 16.4048% ( 2815) 00:12:25.812 2.467 - 2.480: 51.3684% ( 6720) 00:12:25.812 2.480 - 2.493: 60.0832% ( 1675) 00:12:25.812 2.493 - 2.507: 71.8783% ( 2267) 00:12:25.812 2.507 - 2.520: 79.0427% ( 1377) 00:12:25.812 2.520 - 2.533: 82.2997% ( 626) 00:12:25.812 2.533 - 2.547: 86.6805% ( 842) 00:12:25.812 2.547 - 2.560: 91.9199% ( 1007) 00:12:25.812 2.560 - 2.573: 95.5359% ( 695) 00:12:25.812 2.573 - 2.587: 97.5806% ( 393) 00:12:25.812 2.587 - 2.600: 98.7357% ( 222) 00:12:25.812 2.600 - 2.613: 99.2196% ( 93) 00:12:25.812 2.613 - 2.627: 99.3392% ( 23) 00:12:25.812 2.627 - 2.640: 99.3652% ( 5) 00:12:25.812 4.907 - 4.933: 99.3704% ( 1) 00:12:25.812 5.173 - 5.200: 99.3757% ( 1) 00:12:25.812 5.200 - 5.227: 99.3809% ( 1) 00:12:25.812 5.253 - 5.280: 99.3861% ( 1) 00:12:25.812 5.307 - 5.333: 99.3913% ( 1) 00:12:25.812 5.333 - 5.360: 99.3965% ( 1) 00:12:25.812 5.360 - 5.387: 99.4121% ( 3) 00:12:25.812 5.547 - 5.573: 99.4173% ( 1) 00:12:25.812 5.627 - 5.653: 99.4225% ( 1) 00:12:25.812 5.680 - 5.707: 99.4329% ( 2) 00:12:25.812 5.813 - 5.840: 99.4381% ( 1) 00:12:25.812 5.867 - 5.893: 99.4485% ( 2) 00:12:25.812 6.000 - 6.027: 99.4589% ( 2) 00:12:25.812 6.027 - 6.053: 99.4641% ( 1) 00:12:25.812 6.160 - 6.187: 99.4693% ( 1) 00:12:25.812 6.187 - 6.213: 99.4745% ( 1) 00:12:25.812 6.267 - 6.293: 99.4849% ( 2) 00:12:25.812 6.293 - 6.320: 99.4901% ( 1) 00:12:25.812 6.320 - 6.347: 99.4953% ( 1) 00:12:25.812 6.373 - 6.400: 99.5005% ( 1) 00:12:25.812 6.400 - 6.427: 99.5057% ( 1) 00:12:25.812 6.427 - 6.453: 99.5161% ( 2) 00:12:25.812 6.453 - 6.480: 99.5265% ( 2) 00:12:25.812 6.480 - 6.507: 99.5317% ( 1) 00:12:25.812 6.507 - 6.533: 99.5369% ( 1) 00:12:25.812 6.613 - 6.640: 99.5473% ( 2) 00:12:25.812 6.720 - 6.747: 99.5525% ( 1) 00:12:25.812 6.773 - 6.800: 99.5578% ( 1) 00:12:25.812 6.827 - 6.880: 99.5630% ( 1) 00:12:25.812 6.933 - 6.987: 99.5786% ( 3) 00:12:25.812 7.093 - 7.147: 99.5838% ( 1) 00:12:25.812 7.147 - 7.200: 99.5890% ( 1) 00:12:25.812 7.307 - 7.360: 99.5942% ( 1) 00:12:25.812 7.893 - 7.947: 99.5994% ( 1) 00:12:25.812 8.160 - 8.213: 99.6046% ( 1) 00:12:25.812 8.533 - 8.587: 99.6098% ( 1) 00:12:25.812 12.000 - 12.053: 99.6150% ( 1) 00:12:25.812 13.120 - 13.173: 99.6202% ( 1) 00:12:25.812 13.973 - 14.080: 99.6254% ( 1) 00:12:25.812 39.680 - 39.893: 99.6306% ( 1) 00:12:25.812 3372.373 - 3386.027: 99.6410% ( 2) 00:12:25.812 3986.773 - 4014.080: 100.0000% ( 69) 00:12:25.812 00:12:25.812 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:25.812 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:25.812 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:25.812 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:25.812 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:25.812 [ 00:12:25.812 { 00:12:25.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:25.812 "subtype": "Discovery", 00:12:25.812 
"listen_addresses": [], 00:12:25.812 "allow_any_host": true, 00:12:25.812 "hosts": [] 00:12:25.812 }, 00:12:25.812 { 00:12:25.812 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:25.812 "subtype": "NVMe", 00:12:25.812 "listen_addresses": [ 00:12:25.812 { 00:12:25.812 "trtype": "VFIOUSER", 00:12:25.812 "adrfam": "IPv4", 00:12:25.812 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:25.812 "trsvcid": "0" 00:12:25.812 } 00:12:25.812 ], 00:12:25.812 "allow_any_host": true, 00:12:25.812 "hosts": [], 00:12:25.812 "serial_number": "SPDK1", 00:12:25.812 "model_number": "SPDK bdev Controller", 00:12:25.812 "max_namespaces": 32, 00:12:25.812 "min_cntlid": 1, 00:12:25.812 "max_cntlid": 65519, 00:12:25.812 "namespaces": [ 00:12:25.812 { 00:12:25.812 "nsid": 1, 00:12:25.812 "bdev_name": "Malloc1", 00:12:25.812 "name": "Malloc1", 00:12:25.812 "nguid": "777D4668D8E541F8BDF5369A54EB4D71", 00:12:25.813 "uuid": "777d4668-d8e5-41f8-bdf5-369a54eb4d71" 00:12:25.813 }, 00:12:25.813 { 00:12:25.813 "nsid": 2, 00:12:25.813 "bdev_name": "Malloc3", 00:12:25.813 "name": "Malloc3", 00:12:25.813 "nguid": "B2DF97F5479D471E8FA7C175F68076E0", 00:12:25.813 "uuid": "b2df97f5-479d-471e-8fa7-c175f68076e0" 00:12:25.813 } 00:12:25.813 ] 00:12:25.813 }, 00:12:25.813 { 00:12:25.813 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:25.813 "subtype": "NVMe", 00:12:25.813 "listen_addresses": [ 00:12:25.813 { 00:12:25.813 "trtype": "VFIOUSER", 00:12:25.813 "adrfam": "IPv4", 00:12:25.813 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:25.813 "trsvcid": "0" 00:12:25.813 } 00:12:25.813 ], 00:12:25.813 "allow_any_host": true, 00:12:25.813 "hosts": [], 00:12:25.813 "serial_number": "SPDK2", 00:12:25.813 "model_number": "SPDK bdev Controller", 00:12:25.813 "max_namespaces": 32, 00:12:25.813 "min_cntlid": 1, 00:12:25.813 "max_cntlid": 65519, 00:12:25.813 "namespaces": [ 00:12:25.813 { 00:12:25.813 "nsid": 1, 00:12:25.813 "bdev_name": "Malloc2", 00:12:25.813 "name": "Malloc2", 00:12:25.813 "nguid": "758C06AEB0AA4C46B153A43A3B50FDE0", 00:12:25.813 "uuid": "758c06ae-b0aa-4c46-b153-a43a3b50fde0" 00:12:25.813 } 00:12:25.813 ] 00:12:25.813 } 00:12:25.813 ] 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1375428 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:25.813 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:25.813 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.072 [2024-05-15 16:57:04.721957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.072 Malloc4 00:12:26.072 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:26.072 [2024-05-15 16:57:04.892112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.333 16:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:26.333 Asynchronous Event Request test 00:12:26.333 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.333 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:26.333 Registering asynchronous event callbacks... 00:12:26.333 Starting namespace attribute notice tests for all controllers... 00:12:26.333 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:26.333 aer_cb - Changed Namespace 00:12:26.333 Cleaning up... 00:12:26.333 [ 00:12:26.333 { 00:12:26.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:26.333 "subtype": "Discovery", 00:12:26.333 "listen_addresses": [], 00:12:26.333 "allow_any_host": true, 00:12:26.333 "hosts": [] 00:12:26.333 }, 00:12:26.333 { 00:12:26.333 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:26.333 "subtype": "NVMe", 00:12:26.333 "listen_addresses": [ 00:12:26.333 { 00:12:26.333 "trtype": "VFIOUSER", 00:12:26.333 "adrfam": "IPv4", 00:12:26.333 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:26.333 "trsvcid": "0" 00:12:26.333 } 00:12:26.333 ], 00:12:26.333 "allow_any_host": true, 00:12:26.333 "hosts": [], 00:12:26.333 "serial_number": "SPDK1", 00:12:26.333 "model_number": "SPDK bdev Controller", 00:12:26.333 "max_namespaces": 32, 00:12:26.333 "min_cntlid": 1, 00:12:26.333 "max_cntlid": 65519, 00:12:26.333 "namespaces": [ 00:12:26.333 { 00:12:26.333 "nsid": 1, 00:12:26.333 "bdev_name": "Malloc1", 00:12:26.333 "name": "Malloc1", 00:12:26.333 "nguid": "777D4668D8E541F8BDF5369A54EB4D71", 00:12:26.333 "uuid": "777d4668-d8e5-41f8-bdf5-369a54eb4d71" 00:12:26.333 }, 00:12:26.333 { 00:12:26.333 "nsid": 2, 00:12:26.333 "bdev_name": "Malloc3", 00:12:26.333 "name": "Malloc3", 00:12:26.333 "nguid": "B2DF97F5479D471E8FA7C175F68076E0", 00:12:26.333 "uuid": "b2df97f5-479d-471e-8fa7-c175f68076e0" 00:12:26.333 } 00:12:26.333 ] 00:12:26.333 }, 00:12:26.333 { 00:12:26.333 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:26.333 "subtype": "NVMe", 00:12:26.333 "listen_addresses": [ 00:12:26.333 { 00:12:26.333 "trtype": "VFIOUSER", 00:12:26.333 "adrfam": "IPv4", 00:12:26.333 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:26.333 "trsvcid": "0" 00:12:26.333 } 00:12:26.333 ], 00:12:26.333 "allow_any_host": true, 00:12:26.333 "hosts": [], 00:12:26.333 "serial_number": "SPDK2", 00:12:26.333 "model_number": "SPDK bdev Controller", 00:12:26.333 
"max_namespaces": 32, 00:12:26.333 "min_cntlid": 1, 00:12:26.333 "max_cntlid": 65519, 00:12:26.333 "namespaces": [ 00:12:26.333 { 00:12:26.333 "nsid": 1, 00:12:26.333 "bdev_name": "Malloc2", 00:12:26.333 "name": "Malloc2", 00:12:26.333 "nguid": "758C06AEB0AA4C46B153A43A3B50FDE0", 00:12:26.333 "uuid": "758c06ae-b0aa-4c46-b153-a43a3b50fde0" 00:12:26.333 }, 00:12:26.333 { 00:12:26.333 "nsid": 2, 00:12:26.333 "bdev_name": "Malloc4", 00:12:26.333 "name": "Malloc4", 00:12:26.333 "nguid": "FFB745004B3D48309E9C9C3A72E91292", 00:12:26.333 "uuid": "ffb74500-4b3d-4830-9e9c-9c3a72e91292" 00:12:26.333 } 00:12:26.333 ] 00:12:26.333 } 00:12:26.333 ] 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1375428 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1366352 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1366352 ']' 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1366352 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1366352 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1366352' 00:12:26.333 killing process with pid 1366352 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1366352 00:12:26.333 [2024-05-15 16:57:05.147372] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:26.333 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1366352 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1375668 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1375668' 00:12:26.594 Process pid: 1375668 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1375668 00:12:26.594 16:57:05 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1375668 ']' 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.594 16:57:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:26.594 [2024-05-15 16:57:05.382371] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:26.594 [2024-05-15 16:57:05.383325] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:12:26.594 [2024-05-15 16:57:05.383368] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.594 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.854 [2024-05-15 16:57:05.442838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.854 [2024-05-15 16:57:05.508598] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.854 [2024-05-15 16:57:05.508639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.854 [2024-05-15 16:57:05.508652] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.854 [2024-05-15 16:57:05.508662] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.854 [2024-05-15 16:57:05.508669] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.854 [2024-05-15 16:57:05.508806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.854 [2024-05-15 16:57:05.508920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.854 [2024-05-15 16:57:05.509075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.854 [2024-05-15 16:57:05.509076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.854 [2024-05-15 16:57:05.572994] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:26.854 [2024-05-15 16:57:05.573061] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:26.854 [2024-05-15 16:57:05.574114] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:26.854 [2024-05-15 16:57:05.574691] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:26.854 [2024-05-15 16:57:05.574777] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
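The block that follows repeats the earlier vfio-user target setup, this time against an nvmf_tgt started with --interrupt-mode (hence the "Set SPDK running in interrupt mode" and per-thread "intr mode" notices above) and with the extra '-M -I' arguments passed to nvmf_create_transport. Condensed to the commands actually logged below, with the workspace prefix shortened and the backgrounding of nvmf_tgt assumed rather than visible in the trace:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same per-device steps are then repeated for Malloc2 and nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2, as the trace below shows.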
00:12:27.426 16:57:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.426 16:57:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:27.426 16:57:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:28.368 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:28.628 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:28.628 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:28.628 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:28.628 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:28.628 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:28.890 Malloc1 00:12:28.890 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:28.890 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:29.151 16:57:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:29.151 [2024-05-15 16:57:07.985498] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:29.411 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.411 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:29.411 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:29.411 Malloc2 00:12:29.411 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:29.670 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1375668 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1375668 ']' 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1375668 
00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:29.931 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1375668 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1375668' 00:12:30.191 killing process with pid 1375668 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1375668 00:12:30.191 [2024-05-15 16:57:08.768397] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1375668 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:30.191 00:12:30.191 real 0m50.580s 00:12:30.191 user 3m20.546s 00:12:30.191 sys 0m3.024s 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:30.191 16:57:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:30.191 ************************************ 00:12:30.191 END TEST nvmf_vfio_user 00:12:30.191 ************************************ 00:12:30.191 16:57:08 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:30.191 16:57:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:30.191 16:57:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:30.191 16:57:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.191 ************************************ 00:12:30.191 START TEST nvmf_vfio_user_nvme_compliance 00:12:30.191 ************************************ 00:12:30.192 16:57:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:30.453 * Looking for test storage... 
00:12:30.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1376926 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1376926' 00:12:30.453 Process pid: 1376926 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1376926 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1376926 ']' 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:30.453 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.453 [2024-05-15 16:57:09.157716] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:12:30.453 [2024-05-15 16:57:09.157791] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.453 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.453 [2024-05-15 16:57:09.224183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:30.714 [2024-05-15 16:57:09.299426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.714 [2024-05-15 16:57:09.299464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.714 [2024-05-15 16:57:09.299472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.714 [2024-05-15 16:57:09.299479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.714 [2024-05-15 16:57:09.299484] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
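The compliance stage above boots a dedicated target (shm id 0, tracepoint mask 0xFFFF, core mask 0x7) and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal stand-alone equivalent from an SPDK build tree might look like the sketch below; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual code.

# start the target with the same shm id, tracepoint mask and core mask as above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# wait for the default RPC socket to come up (simplified waitforlisten)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done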
00:12:30.714 [2024-05-15 16:57:09.299588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.714 [2024-05-15 16:57:09.299659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.714 [2024-05-15 16:57:09.299663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.284 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:31.284 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:12:31.284 16:57:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:32.225 malloc0 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.225 16:57:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:32.225 [2024-05-15 16:57:11.024086] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.225 16:57:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:32.485 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.485 00:12:32.485 00:12:32.485 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.485 http://cunit.sourceforge.net/ 00:12:32.485 00:12:32.485 00:12:32.485 Suite: nvme_compliance 00:12:32.485 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 16:57:11.194045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.485 [2024-05-15 16:57:11.195398] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:32.485 [2024-05-15 16:57:11.195410] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:32.485 [2024-05-15 16:57:11.195414] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:32.486 [2024-05-15 16:57:11.197068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.486 passed 00:12:32.486 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 16:57:11.291656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.486 [2024-05-15 16:57:11.294673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.745 passed 00:12:32.745 Test: admin_identify_ns ...[2024-05-15 16:57:11.389794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.745 [2024-05-15 16:57:11.447596] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:32.745 [2024-05-15 16:57:11.457557] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:32.745 [2024-05-15 16:57:11.478673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.745 passed 00:12:32.745 Test: admin_get_features_mandatory_features ...[2024-05-15 16:57:11.572661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.745 [2024-05-15 16:57:11.575678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.005 passed 00:12:33.005 Test: admin_get_features_optional_features ...[2024-05-15 16:57:11.669225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.005 [2024-05-15 16:57:11.673249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.005 passed 00:12:33.005 Test: admin_set_features_number_of_queues ...[2024-05-15 16:57:11.765428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.265 [2024-05-15 16:57:11.870658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.265 passed 00:12:33.265 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 16:57:11.963325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.265 [2024-05-15 16:57:11.966349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.265 passed 
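rpc_cmd in the trace above is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; spelled out directly, the provisioning it performs before launching the compliance binary is roughly the following sketch (paths relative to the SPDK tree, commands as captured in the log):

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER               # vfio-user transport
mkdir -p /var/run/vfio-user                          # socket directory for the endpoint
$rpc bdev_malloc_create 64 512 -b malloc0            # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# then drive the controller with the compliance suite, exactly as logged
./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The CUnit run summary further down (18 tests, 0 failures, about 1.6 s elapsed) is the output of that last command.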
00:12:33.265 Test: admin_get_log_page_with_lpo ...[2024-05-15 16:57:12.059483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.525 [2024-05-15 16:57:12.124557] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:33.525 [2024-05-15 16:57:12.137600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.525 passed 00:12:33.525 Test: fabric_property_get ...[2024-05-15 16:57:12.231671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.525 [2024-05-15 16:57:12.232899] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:33.525 [2024-05-15 16:57:12.234687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.525 passed 00:12:33.525 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 16:57:12.328229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.525 [2024-05-15 16:57:12.329468] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:33.525 [2024-05-15 16:57:12.331250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.785 passed 00:12:33.785 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 16:57:12.424379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.785 [2024-05-15 16:57:12.508556] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:33.785 [2024-05-15 16:57:12.524554] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:33.785 [2024-05-15 16:57:12.529634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.785 passed 00:12:34.045 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 16:57:12.621213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.045 [2024-05-15 16:57:12.622435] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:34.045 [2024-05-15 16:57:12.624233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.045 passed 00:12:34.045 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 16:57:12.715353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.045 [2024-05-15 16:57:12.794551] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:34.045 [2024-05-15 16:57:12.818550] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:34.045 [2024-05-15 16:57:12.823629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.045 passed 00:12:34.303 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 16:57:12.913217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.303 [2024-05-15 16:57:12.914452] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:34.303 [2024-05-15 16:57:12.914472] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:34.303 [2024-05-15 16:57:12.916230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.303 passed 00:12:34.303 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
16:57:13.009336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.303 [2024-05-15 16:57:13.100554] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:34.304 [2024-05-15 16:57:13.108553] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:34.304 [2024-05-15 16:57:13.116554] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:34.304 [2024-05-15 16:57:13.124556] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:34.563 [2024-05-15 16:57:13.153643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.563 passed 00:12:34.563 Test: admin_create_io_sq_verify_pc ...[2024-05-15 16:57:13.247246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.563 [2024-05-15 16:57:13.261560] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:34.563 [2024-05-15 16:57:13.279398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.563 passed 00:12:34.563 Test: admin_create_io_qp_max_qps ...[2024-05-15 16:57:13.374935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:35.943 [2024-05-15 16:57:14.482554] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:36.202 [2024-05-15 16:57:14.861046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.203 passed 00:12:36.203 Test: admin_create_io_sq_shared_cq ...[2024-05-15 16:57:14.954395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:36.463 [2024-05-15 16:57:15.085564] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:36.463 [2024-05-15 16:57:15.122618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:36.463 passed 00:12:36.463 00:12:36.463 Run Summary: Type Total Ran Passed Failed Inactive 00:12:36.463 suites 1 1 n/a 0 0 00:12:36.463 tests 18 18 18 0 0 00:12:36.463 asserts 360 360 360 0 n/a 00:12:36.463 00:12:36.463 Elapsed time = 1.647 seconds 00:12:36.463 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1376926 00:12:36.463 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1376926 ']' 00:12:36.463 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1376926 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376926 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376926' 00:12:36.464 killing process with pid 1376926 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 1376926 00:12:36.464 [2024-05-15 16:57:15.227542] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:36.464 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1376926 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:36.726 00:12:36.726 real 0m6.400s 00:12:36.726 user 0m18.299s 00:12:36.726 sys 0m0.467s 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:36.726 ************************************ 00:12:36.726 END TEST nvmf_vfio_user_nvme_compliance 00:12:36.726 ************************************ 00:12:36.726 16:57:15 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:36.726 16:57:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:36.726 16:57:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.726 16:57:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.726 ************************************ 00:12:36.726 START TEST nvmf_vfio_user_fuzz 00:12:36.726 ************************************ 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:36.726 * Looking for test storage... 
00:12:36.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:36.726 16:57:15 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1378227 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1378227' 00:12:36.726 Process pid: 1378227 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1378227 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1378227 ']' 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.726 16:57:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:37.765 16:57:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.765 16:57:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:12:37.765 16:57:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 malloc0 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:38.707 16:57:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:10.822 Fuzzing completed. Shutting down the fuzz application 00:13:10.822 00:13:10.822 Dumping successful admin opcodes: 00:13:10.822 8, 9, 10, 24, 00:13:10.822 Dumping successful io opcodes: 00:13:10.822 0, 00:13:10.822 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1081908, total successful commands: 4263, random_seed: 4058654528 00:13:10.822 NS: 0x200003a1ef00 admin qp, Total commands completed: 136058, total successful commands: 1106, random_seed: 1697084096 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1378227 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1378227 ']' 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1378227 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:13:10.822 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:10.823 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1378227 00:13:10.823 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:10.823 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:10.823 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1378227' 00:13:10.823 killing process with pid 1378227 00:13:10.823 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 1378227 00:13:10.823 16:57:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1378227 00:13:10.823 16:57:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
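The fuzz stage reuses the same vfio-user subsystem layout (malloc0 behind nqn.2021-09.io.spdk:cnode0, listener at /var/run/vfio-user) and then hammers it with the nvme_fuzz app for a fixed run length and seed. Isolated from the harness, the invocation recorded above is essentially the sketch below:

trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

# core mask 0x2, roughly 30 s of runtime, seed 123456; remaining flags as captured in the trace
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The "Fuzzing completed" block above (about 1.08 M I/O commands and 136 k admin commands, with 4263 and 1106 of them successful) is that command's output.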
00:13:10.823 16:57:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:10.823 00:13:10.823 real 0m33.673s 00:13:10.823 user 0m38.032s 00:13:10.823 sys 0m24.205s 00:13:10.823 16:57:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.823 16:57:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.823 ************************************ 00:13:10.823 END TEST nvmf_vfio_user_fuzz 00:13:10.823 ************************************ 00:13:10.823 16:57:49 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:10.823 16:57:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:10.823 16:57:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.823 16:57:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.823 ************************************ 00:13:10.823 START TEST nvmf_host_management 00:13:10.823 ************************************ 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:10.823 * Looking for test storage... 00:13:10.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.823 16:57:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.414 16:57:55 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.414 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:17.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:17.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
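The loop that follows maps each matched E810 function (vendor 0x8086, device 0x159b at 0000:4b:00.0 and 0000:4b:00.1) to its kernel net device by globbing sysfs. Reduced to its essentials, and ignoring the harness's pci_bus_cache bookkeeping, the lookup is just:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    ls /sys/bus/pci/devices/$pci/net/    # the netdev(s) bound to this PCI function
done
# prints cvl_0_0 and cvl_0_1, matching the "Found net devices under ..." lines below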
00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:17.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:17.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.415 16:57:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:13:17.415 00:13:17.415 --- 10.0.0.2 ping statistics --- 00:13:17.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.415 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:13:17.415 00:13:17.415 --- 10.0.0.1 ping statistics --- 00:13:17.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.415 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1388205 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1388205 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1388205 ']' 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:17.415 16:57:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:17.675 [2024-05-15 16:57:56.278503] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
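For NET_TYPE=phy, the nvmftestinit sequence above splits the two ports of the E810 across network namespaces so that target and initiator exchange real NVMe/TCP traffic on the wire even on a single host: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, while cvl_0_1 stays in the root namespace with 10.0.0.1/24. Lifted from the trace, the essential commands are:

ns=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $ns
ip link set cvl_0_0 netns $ns                                  # target-side port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0          # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                             # connectivity check, both directions
ip netns exec $ns ping -c 1 10.0.0.1

From here on every target-side command is prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD / NVMF_APP above), which is why the nvmf_tgt for the host_management test is launched through the namespace.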
00:13:17.676 [2024-05-15 16:57:56.278555] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.676 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.676 [2024-05-15 16:57:56.360635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.676 [2024-05-15 16:57:56.426710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.676 [2024-05-15 16:57:56.426747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.676 [2024-05-15 16:57:56.426754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.676 [2024-05-15 16:57:56.426761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.676 [2024-05-15 16:57:56.426766] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.676 [2024-05-15 16:57:56.426872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.676 [2024-05-15 16:57:56.427026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.676 [2024-05-15 16:57:56.427182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.676 [2024-05-15 16:57:56.427184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:18.618 [2024-05-15 16:57:57.136171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.618 16:57:57 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:18.618 Malloc0 00:13:18.618 [2024-05-15 16:57:57.199291] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:18.618 [2024-05-15 16:57:57.199521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1388571 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1388571 /var/tmp/bdevperf.sock 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1388571 ']' 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.618 { 00:13:18.618 "params": { 00:13:18.618 "name": "Nvme$subsystem", 00:13:18.618 "trtype": "$TEST_TRANSPORT", 00:13:18.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.618 "adrfam": "ipv4", 00:13:18.618 "trsvcid": "$NVMF_PORT", 00:13:18.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.618 "hdgst": ${hdgst:-false}, 00:13:18.618 "ddgst": ${ddgst:-false} 00:13:18.618 }, 00:13:18.618 "method": "bdev_nvme_attach_controller" 00:13:18.618 } 00:13:18.618 EOF 00:13:18.618 )") 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
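The heredoc above only emits the per-controller params object; gen_nvmf_target_json presumably wraps it in the standard SPDK JSON-config layout before it reaches bdevperf via --json. A hedged stand-alone equivalent (the outer "subsystems"/"bdev" wrapper and the temporary file name are assumptions; the parameter values are the ones printed just below) looks like this:

# Assumed shape of the generated bdevperf configuration.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the run above: 64 outstanding 64KiB verify I/Os for 10 seconds.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10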
00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:18.618 16:57:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.618 "params": { 00:13:18.618 "name": "Nvme0", 00:13:18.618 "trtype": "tcp", 00:13:18.618 "traddr": "10.0.0.2", 00:13:18.618 "adrfam": "ipv4", 00:13:18.619 "trsvcid": "4420", 00:13:18.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:18.619 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:18.619 "hdgst": false, 00:13:18.619 "ddgst": false 00:13:18.619 }, 00:13:18.619 "method": "bdev_nvme_attach_controller" 00:13:18.619 }' 00:13:18.619 [2024-05-15 16:57:57.297358] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:13:18.619 [2024-05-15 16:57:57.297406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388571 ] 00:13:18.619 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.619 [2024-05-15 16:57:57.356392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.619 [2024-05-15 16:57:57.420717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.880 Running I/O for 10 seconds... 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.454 16:57:58 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=583 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 583 -ge 100 ']' 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.454 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 [2024-05-15 16:57:58.158579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158747] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the 
state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.454 [2024-05-15 16:57:58.158959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.158965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.158972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.158978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.158985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.158991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.158998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce53b0 is same with the state(5) to be set 00:13:19.455 [2024-05-15 16:57:58.159731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.159983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.159994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:19.455 [2024-05-15 16:57:58.160112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 
16:57:58.160301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.455 [2024-05-15 16:57:58.160329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.455 [2024-05-15 16:57:58.160337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:19.456 [2024-05-15 16:57:58.160959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.456 [2024-05-15 16:57:58.160968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563100 is same with the state(5) to be set 00:13:19.456 [2024-05-15 16:57:58.161011] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1563100 was disconnected and freed. reset controller. 
00:13:19.456 [2024-05-15 16:57:58.162230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:19.456 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:19.456 task offset: 82432 on job bdev=Nvme0n1 fails
00:13:19.456
00:13:19.456 Latency(us)
00:13:19.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:19.456 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:19.456 Job: Nvme0n1 ended in about 0.47 seconds with error
00:13:19.456 Verification LBA range: start 0x0 length 0x400
00:13:19.456 Nvme0n1 : 0.47 1356.80 84.80 134.84 0.00 41739.76 6717.44 34952.53
00:13:19.457 ===================================================================================================================
00:13:19.457 Total : 1356.80 84.80 134.84 0.00 41739.76 6717.44 34952.53
00:13:19.457 [2024-05-15 16:57:58.164253] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:19.457 [2024-05-15 16:57:58.164278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1109750 (9): Bad file descriptor
00:13:19.457 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:19.457 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:19.457 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:19.457 16:57:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:19.457 16:57:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:13:19.457 [2024-05-15 16:57:58.217838] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
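The wall of ABORTED - SQ DELETION completions and the controller reset above are the point of this test: host_management.sh removes the host from the subsystem while bdevperf still has 64 commands queued, then re-adds it so the initiator can reconnect. Driven by hand against the target's default RPC socket, the toggle is roughly the following sketch (only the two RPC names and NQNs are taken from the trace; the explicit rpc.py invocation is illustrative):

# Revoke and restore host access; in-flight I/O on the revoked host is aborted
# with SQ DELETION status, and the host-side driver resets the controller once
# access is restored.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0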
00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1388571 00:13:20.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1388571) - No such process 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:20.397 { 00:13:20.397 "params": { 00:13:20.397 "name": "Nvme$subsystem", 00:13:20.397 "trtype": "$TEST_TRANSPORT", 00:13:20.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.397 "adrfam": "ipv4", 00:13:20.397 "trsvcid": "$NVMF_PORT", 00:13:20.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.397 "hdgst": ${hdgst:-false}, 00:13:20.397 "ddgst": ${ddgst:-false} 00:13:20.397 }, 00:13:20.397 "method": "bdev_nvme_attach_controller" 00:13:20.397 } 00:13:20.397 EOF 00:13:20.397 )") 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:20.397 16:57:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:20.397 "params": { 00:13:20.397 "name": "Nvme0", 00:13:20.397 "trtype": "tcp", 00:13:20.397 "traddr": "10.0.0.2", 00:13:20.397 "adrfam": "ipv4", 00:13:20.397 "trsvcid": "4420", 00:13:20.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:20.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:20.397 "hdgst": false, 00:13:20.397 "ddgst": false 00:13:20.397 }, 00:13:20.397 "method": "bdev_nvme_attach_controller" 00:13:20.397 }' 00:13:20.656 [2024-05-15 16:57:59.232475] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:13:20.656 [2024-05-15 16:57:59.232530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388917 ] 00:13:20.656 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.656 [2024-05-15 16:57:59.290811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.656 [2024-05-15 16:57:59.354274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.916 Running I/O for 1 seconds... 
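Both bdevperf runs receive their configuration through --json /dev/fd/NN; those descriptors come from bash process substitution around gen_nvmf_target_json, so no config file is written to disk. Spelled out directly (helper name as in nvmf/common.sh, flags as in the second run above):

# Feed the generated JSON to bdevperf through process substitution.
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1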
00:13:21.856 00:13:21.856 Latency(us) 00:13:21.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.856 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:21.856 Verification LBA range: start 0x0 length 0x400 00:13:21.856 Nvme0n1 : 1.03 1613.90 100.87 0.00 0.00 38973.85 6280.53 32986.45 00:13:21.856 =================================================================================================================== 00:13:21.856 Total : 1613.90 100.87 0.00 0.00 38973.85 6280.53 32986.45 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.856 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.856 rmmod nvme_tcp 00:13:22.117 rmmod nvme_fabrics 00:13:22.117 rmmod nvme_keyring 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1388205 ']' 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1388205 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1388205 ']' 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1388205 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1388205 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1388205' 00:13:22.117 killing process with pid 1388205 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1388205 00:13:22.117 [2024-05-15 16:58:00.817026] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1388205 00:13:22.117 [2024-05-15 16:58:00.921337] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.117 16:58:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.659 16:58:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.659 16:58:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:24.659 00:13:24.659 real 0m13.878s 00:13:24.659 user 0m22.580s 00:13:24.659 sys 0m6.047s 00:13:24.659 16:58:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.659 16:58:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:24.659 ************************************ 00:13:24.659 END TEST nvmf_host_management 00:13:24.659 ************************************ 00:13:24.659 16:58:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:24.659 16:58:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:24.659 16:58:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.659 16:58:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.659 ************************************ 00:13:24.659 START TEST nvmf_lvol 00:13:24.659 ************************************ 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:24.659 * Looking for test storage... 
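The teardown above flushes the leftover data-plane address and drops the SPDK network namespace before the next test (nvmf_lvol) re-probes the NICs. The _remove_spdk_ns helper body is not shown in this excerpt; a rough manual equivalent, with the namespace and interface names taken from the trace, might be:

# Assumed cleanup equivalent; the real helper may do more than this.
sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
sudo ip -4 addr flush cvl_0_1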
00:13:24.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.659 16:58:03 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.659 16:58:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.660 16:58:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:31.245 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:31.245 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:31.245 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:31.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.245 
16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.245 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.246 16:58:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:13:31.246 00:13:31.246 --- 10.0.0.2 ping statistics --- 00:13:31.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.246 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
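The nvmf_tcp_init sequence traced above builds the test fabric from the two E810 ports found earlier: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the host namespace as the initiator side (10.0.0.1), and an iptables rule opens TCP port 4420 for NVMe/TCP. A minimal sketch of the same steps, using the interface and namespace names from this run (run as root; error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host

Both pings completing with 0% loss, as the statistics around this point show, is what lets nvmf_tcp_init return 0.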
00:13:31.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:13:31.246 00:13:31.246 --- 10.0.0.1 ping statistics --- 00:13:31.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.246 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:31.246 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1393224 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1393224 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1393224 ']' 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:31.506 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:31.506 [2024-05-15 16:58:10.141458] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:13:31.506 [2024-05-15 16:58:10.141560] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.506 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.506 [2024-05-15 16:58:10.215825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.506 [2024-05-15 16:58:10.290244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.506 [2024-05-15 16:58:10.290284] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:31.506 [2024-05-15 16:58:10.290291] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.506 [2024-05-15 16:58:10.290298] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.506 [2024-05-15 16:58:10.290304] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.506 [2024-05-15 16:58:10.290480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.506 [2024-05-15 16:58:10.290606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.506 [2024-05-15 16:58:10.290587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.448 16:58:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:32.448 [2024-05-15 16:58:11.100185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.448 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:32.709 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:32.709 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:32.709 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:32.709 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:32.971 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:33.232 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1a2d6787-c2a9-447d-825f-a4701a3fc34e 00:13:33.232 16:58:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a2d6787-c2a9-447d-825f-a4701a3fc34e lvol 20 00:13:33.232 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7b35b202-69bb-4602-8495-b14adfb886a4 00:13:33.232 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:33.494 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b35b202-69bb-4602-8495-b14adfb886a4 00:13:33.756 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
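With the target application running inside cvl_0_0_ns_spdk, nvmf_lvol.sh provisions its device stack entirely over JSON-RPC: two 64 MiB malloc bdevs are striped into a raid0, a logical volume store is built on the raid, a 20 MiB volume is carved out of it, and that volume is exported through nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. A condensed sketch of the calls shown above, where $rpc abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path and the captured variables mirror the script's:

    $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512                                   # -> Malloc0 (64 MiB, 512 B blocks)
    $rpc bdev_malloc_create 64 512                                   # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0, 64 KiB strip size
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # 1a2d6787-... in this run
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, 7b35b202-... here
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420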
00:13:33.756 [2024-05-15 16:58:12.481895] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:33.756 [2024-05-15 16:58:12.482160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.756 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:34.017 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1393887 00:13:34.017 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:34.017 16:58:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:34.017 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.960 16:58:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7b35b202-69bb-4602-8495-b14adfb886a4 MY_SNAPSHOT 00:13:35.221 16:58:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e8c694a4-52d9-4f5e-a0fe-bcad22e1961f 00:13:35.221 16:58:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7b35b202-69bb-4602-8495-b14adfb886a4 30 00:13:35.482 16:58:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e8c694a4-52d9-4f5e-a0fe-bcad22e1961f MY_CLONE 00:13:35.482 16:58:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=553a4ab9-0413-48b8-ae38-a469d154e897 00:13:35.482 16:58:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 553a4ab9-0413-48b8-ae38-a469d154e897 00:13:36.052 16:58:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1393887 00:13:44.229 Initializing NVMe Controllers 00:13:44.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:44.229 Controller IO queue size 128, less than required. 00:13:44.229 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:44.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:44.229 Initialization complete. Launching workers. 
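The point of nvmf_lvol is that the volume tree is reshaped while spdk_nvme_perf (perf_pid=1393887 above) pushes 4 KiB random writes at queue depth 128 over the connection: a snapshot is taken, the live volume is resized from 20 MiB to 30 MiB, the snapshot is cloned, and the clone is inflated, all before the 10-second run completes. Reduced to the RPC calls in the trace, with $rpc, $lvol, $snapshot, $clone and $perf_pid standing for the path and values captured earlier:

    $rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT      # e8c694a4-... in this run
    $rpc bdev_lvol_resize   "$lvol" 30               # grow the live lvol 20 MiB -> 30 MiB
    $rpc bdev_lvol_clone    "$snapshot" MY_CLONE     # 553a4ab9-... here
    $rpc bdev_lvol_inflate  "$clone"                 # give the clone its own clusters, detaching it from the snapshot
    wait "$perf_pid"                                 # let the 10 s spdk_nvme_perf run finish

The latency table that follows shows both perf cores (3 and 4) kept completing I/O throughout.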
00:13:44.229 ======================================================== 00:13:44.229 Latency(us) 00:13:44.229 Device Information : IOPS MiB/s Average min max 00:13:44.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12247.87 47.84 10453.08 2052.36 60234.44 00:13:44.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17018.44 66.48 7521.24 3094.88 79360.72 00:13:44.229 ======================================================== 00:13:44.229 Total : 29266.30 114.32 8748.21 2052.36 79360.72 00:13:44.229 00:13:44.229 16:58:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:44.229 16:58:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b35b202-69bb-4602-8495-b14adfb886a4 00:13:44.491 16:58:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a2d6787-c2a9-447d-825f-a4701a3fc34e 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.752 rmmod nvme_tcp 00:13:44.752 rmmod nvme_fabrics 00:13:44.752 rmmod nvme_keyring 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.752 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1393224 ']' 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1393224 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1393224 ']' 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1393224 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1393224 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1393224' 00:13:44.753 killing process with pid 1393224 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1393224 00:13:44.753 [2024-05-15 16:58:23.526010] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:13:44.753 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1393224 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.013 16:58:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.926 16:58:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:46.926 00:13:46.926 real 0m22.689s 00:13:46.926 user 1m2.772s 00:13:46.926 sys 0m7.645s 00:13:46.926 16:58:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.926 16:58:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:46.926 ************************************ 00:13:46.926 END TEST nvmf_lvol 00:13:46.926 ************************************ 00:13:47.187 16:58:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:47.187 16:58:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:47.187 16:58:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:47.187 16:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.187 ************************************ 00:13:47.187 START TEST nvmf_lvs_grow 00:13:47.187 ************************************ 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:47.187 * Looking for test storage... 
00:13:47.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.187 16:58:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.772 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.772 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.772 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:53.773 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.773 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.773 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:13:54.034 00:13:54.034 --- 10.0.0.2 ping statistics --- 00:13:54.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.034 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:13:54.034 00:13:54.034 --- 10.0.0.1 ping statistics --- 00:13:54.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.034 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.034 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1399880 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1399880 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1399880 ']' 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.294 16:58:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:54.294 [2024-05-15 16:58:32.960855] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:13:54.294 [2024-05-15 16:58:32.960918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.294 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.294 [2024-05-15 16:58:33.031450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.294 [2024-05-15 16:58:33.106673] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.294 [2024-05-15 16:58:33.106712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:54.294 [2024-05-15 16:58:33.106720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.294 [2024-05-15 16:58:33.106726] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.294 [2024-05-15 16:58:33.106732] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.294 [2024-05-15 16:58:33.106750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:55.236 [2024-05-15 16:58:33.906138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:55.236 ************************************ 00:13:55.236 START TEST lvs_grow_clean 00:13:55.236 ************************************ 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.236 16:58:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:55.497 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:55.497 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:55.497 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:13:55.497 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:13:55.497 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:55.758 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:55.758 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:55.758 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 lvol 150 00:13:55.758 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=286c0cea-06b8-498d-a242-e92bfa2efdfb 00:13:55.758 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:56.019 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:56.019 [2024-05-15 16:58:34.714514] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:56.019 [2024-05-15 16:58:34.714568] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:56.019 true 00:13:56.019 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:13:56.019 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:56.280 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:56.280 16:58:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:56.280 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 286c0cea-06b8-498d-a242-e92bfa2efdfb 00:13:56.540 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:56.540 [2024-05-15 16:58:35.292112] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:56.540 [2024-05-15 
16:58:35.292342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.540 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:56.800 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:56.800 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1400569 00:13:56.800 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:56.800 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1400569 /var/tmp/bdevperf.sock 00:13:56.800 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1400569 ']' 00:13:56.801 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.801 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:56.801 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:56.801 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:56.801 16:58:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:56.801 [2024-05-15 16:58:35.479844] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
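lvs_grow_clean sets up the growth scenario that the bdevperf run below exercises: a 200 MiB file-backed AIO bdev carries a logical volume store with 4 MiB clusters (49 data clusters), a 150 MiB volume from that store is exported over NVMe/TCP, and the backing file is truncated to 400 MiB and rescanned so that bdev_lvol_grow_lvstore can later claim the new space while I/O is running. A sketch of the backing-store side as traced above, with $SPDK abbreviating the workspace checkout and $rpc its scripts/rpc.py:

    aio_file=$SPDK/test/nvmf/target/aio_bdev
    rm -f "$aio_file"; truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                    # 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # 27b5418f-..., 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                  # 286c0cea-..., 150 MiB
    truncate -s 400M "$aio_file"                                      # grow the file underneath
    $rpc bdev_aio_rescan aio_bdev                                     # 51200 -> 102400 blocks
    # later, while bdevperf writes to the exported volume:
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                             # total_data_clusters 49 -> 99

The test passes when bdev_lvol_get_lvstores reports 99 data clusters afterwards, as checked further down.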
00:13:56.801 [2024-05-15 16:58:35.479883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400569 ] 00:13:56.801 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.801 [2024-05-15 16:58:35.548539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.801 [2024-05-15 16:58:35.612637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.740 16:58:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:57.740 16:58:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:13:57.740 16:58:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:57.740 Nvme0n1 00:13:57.740 16:58:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:58.000 [ 00:13:58.000 { 00:13:58.000 "name": "Nvme0n1", 00:13:58.000 "aliases": [ 00:13:58.000 "286c0cea-06b8-498d-a242-e92bfa2efdfb" 00:13:58.000 ], 00:13:58.000 "product_name": "NVMe disk", 00:13:58.000 "block_size": 4096, 00:13:58.000 "num_blocks": 38912, 00:13:58.000 "uuid": "286c0cea-06b8-498d-a242-e92bfa2efdfb", 00:13:58.000 "assigned_rate_limits": { 00:13:58.000 "rw_ios_per_sec": 0, 00:13:58.000 "rw_mbytes_per_sec": 0, 00:13:58.000 "r_mbytes_per_sec": 0, 00:13:58.000 "w_mbytes_per_sec": 0 00:13:58.000 }, 00:13:58.000 "claimed": false, 00:13:58.000 "zoned": false, 00:13:58.000 "supported_io_types": { 00:13:58.000 "read": true, 00:13:58.000 "write": true, 00:13:58.000 "unmap": true, 00:13:58.000 "write_zeroes": true, 00:13:58.000 "flush": true, 00:13:58.000 "reset": true, 00:13:58.000 "compare": true, 00:13:58.000 "compare_and_write": true, 00:13:58.000 "abort": true, 00:13:58.000 "nvme_admin": true, 00:13:58.000 "nvme_io": true 00:13:58.000 }, 00:13:58.000 "memory_domains": [ 00:13:58.000 { 00:13:58.000 "dma_device_id": "system", 00:13:58.000 "dma_device_type": 1 00:13:58.000 } 00:13:58.000 ], 00:13:58.000 "driver_specific": { 00:13:58.000 "nvme": [ 00:13:58.000 { 00:13:58.000 "trid": { 00:13:58.000 "trtype": "TCP", 00:13:58.000 "adrfam": "IPv4", 00:13:58.000 "traddr": "10.0.0.2", 00:13:58.000 "trsvcid": "4420", 00:13:58.000 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:58.000 }, 00:13:58.000 "ctrlr_data": { 00:13:58.000 "cntlid": 1, 00:13:58.000 "vendor_id": "0x8086", 00:13:58.000 "model_number": "SPDK bdev Controller", 00:13:58.000 "serial_number": "SPDK0", 00:13:58.000 "firmware_revision": "24.05", 00:13:58.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:58.000 "oacs": { 00:13:58.000 "security": 0, 00:13:58.000 "format": 0, 00:13:58.000 "firmware": 0, 00:13:58.000 "ns_manage": 0 00:13:58.000 }, 00:13:58.000 "multi_ctrlr": true, 00:13:58.000 "ana_reporting": false 00:13:58.000 }, 00:13:58.000 "vs": { 00:13:58.000 "nvme_version": "1.3" 00:13:58.000 }, 00:13:58.000 "ns_data": { 00:13:58.000 "id": 1, 00:13:58.000 "can_share": true 00:13:58.000 } 00:13:58.000 } 00:13:58.000 ], 00:13:58.000 "mp_policy": "active_passive" 00:13:58.000 } 00:13:58.000 } 00:13:58.000 ] 00:13:58.000 16:58:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:58.000 16:58:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1400737 00:13:58.000 16:58:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:58.000 Running I/O for 10 seconds... 00:13:58.942 Latency(us) 00:13:58.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.942 Nvme0n1 : 1.00 18319.00 71.56 0.00 0.00 0.00 0.00 0.00 00:13:58.942 =================================================================================================================== 00:13:58.942 Total : 18319.00 71.56 0.00 0.00 0.00 0.00 0.00 00:13:58.942 00:13:59.882 16:58:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:00.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.143 Nvme0n1 : 2.00 18399.00 71.87 0.00 0.00 0.00 0.00 0.00 00:14:00.143 =================================================================================================================== 00:14:00.143 Total : 18399.00 71.87 0.00 0.00 0.00 0.00 0.00 00:14:00.143 00:14:00.143 true 00:14:00.143 16:58:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:00.143 16:58:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:00.405 16:58:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:00.405 16:58:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:00.405 16:58:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1400737 00:14:00.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.975 Nvme0n1 : 3.00 18427.00 71.98 0.00 0.00 0.00 0.00 0.00 00:14:00.975 =================================================================================================================== 00:14:00.975 Total : 18427.00 71.98 0.00 0.00 0.00 0.00 0.00 00:14:00.975 00:14:02.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.357 Nvme0n1 : 4.00 18429.25 71.99 0.00 0.00 0.00 0.00 0.00 00:14:02.357 =================================================================================================================== 00:14:02.357 Total : 18429.25 71.99 0.00 0.00 0.00 0.00 0.00 00:14:02.357 00:14:03.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.298 Nvme0n1 : 5.00 18458.20 72.10 0.00 0.00 0.00 0.00 0.00 00:14:03.298 =================================================================================================================== 00:14:03.298 Total : 18458.20 72.10 0.00 0.00 0.00 0.00 0.00 00:14:03.298 00:14:04.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.246 Nvme0n1 : 6.00 18464.00 72.12 0.00 0.00 0.00 0.00 0.00 00:14:04.246 
=================================================================================================================== 00:14:04.246 Total : 18464.00 72.12 0.00 0.00 0.00 0.00 0.00 00:14:04.246 00:14:05.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.202 Nvme0n1 : 7.00 18489.00 72.22 0.00 0.00 0.00 0.00 0.00 00:14:05.202 =================================================================================================================== 00:14:05.202 Total : 18489.00 72.22 0.00 0.00 0.00 0.00 0.00 00:14:05.202 00:14:06.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.143 Nvme0n1 : 8.00 18499.50 72.26 0.00 0.00 0.00 0.00 0.00 00:14:06.143 =================================================================================================================== 00:14:06.143 Total : 18499.50 72.26 0.00 0.00 0.00 0.00 0.00 00:14:06.143 00:14:07.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.085 Nvme0n1 : 9.00 18506.89 72.29 0.00 0.00 0.00 0.00 0.00 00:14:07.085 =================================================================================================================== 00:14:07.085 Total : 18506.89 72.29 0.00 0.00 0.00 0.00 0.00 00:14:07.085 00:14:08.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.027 Nvme0n1 : 10.00 18514.90 72.32 0.00 0.00 0.00 0.00 0.00 00:14:08.027 =================================================================================================================== 00:14:08.027 Total : 18514.90 72.32 0.00 0.00 0.00 0.00 0.00 00:14:08.027 00:14:08.027 00:14:08.027 Latency(us) 00:14:08.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.027 Nvme0n1 : 10.00 18514.08 72.32 0.00 0.00 6909.14 2348.37 13325.65 00:14:08.027 =================================================================================================================== 00:14:08.027 Total : 18514.08 72.32 0.00 0.00 6909.14 2348.37 13325.65 00:14:08.027 0 00:14:08.027 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1400569 00:14:08.027 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 1400569 ']' 00:14:08.027 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1400569 00:14:08.027 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1400569 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1400569' 00:14:08.028 killing process with pid 1400569 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1400569 00:14:08.028 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.028 00:14:08.028 Latency(us) 00:14:08.028 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:08.028 =================================================================================================================== 00:14:08.028 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:08.028 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1400569 00:14:08.288 16:58:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:08.288 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:08.549 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:08.549 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:08.810 [2024-05-15 16:58:47.577036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:08.810 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:09.071 request: 00:14:09.071 { 00:14:09.071 "uuid": "27b5418f-ec3d-4ad7-ab9b-582876e497a7", 00:14:09.071 "method": "bdev_lvol_get_lvstores", 00:14:09.071 "req_id": 1 00:14:09.071 } 00:14:09.071 Got JSON-RPC error response 00:14:09.071 response: 00:14:09.071 { 00:14:09.071 "code": -19, 00:14:09.071 "message": "No such device" 00:14:09.071 } 00:14:09.071 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:09.071 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:09.071 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:09.071 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:09.071 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:09.333 aio_bdev 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 286c0cea-06b8-498d-a242-e92bfa2efdfb 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=286c0cea-06b8-498d-a242-e92bfa2efdfb 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:09.333 16:58:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:09.333 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 286c0cea-06b8-498d-a242-e92bfa2efdfb -t 2000 00:14:09.594 [ 00:14:09.594 { 00:14:09.594 "name": "286c0cea-06b8-498d-a242-e92bfa2efdfb", 00:14:09.594 "aliases": [ 00:14:09.594 "lvs/lvol" 00:14:09.594 ], 00:14:09.594 "product_name": "Logical Volume", 00:14:09.594 "block_size": 4096, 00:14:09.594 "num_blocks": 38912, 00:14:09.594 "uuid": "286c0cea-06b8-498d-a242-e92bfa2efdfb", 00:14:09.594 "assigned_rate_limits": { 00:14:09.594 "rw_ios_per_sec": 0, 00:14:09.594 "rw_mbytes_per_sec": 0, 00:14:09.594 "r_mbytes_per_sec": 0, 00:14:09.594 "w_mbytes_per_sec": 0 00:14:09.594 }, 00:14:09.594 "claimed": false, 00:14:09.594 "zoned": false, 00:14:09.594 "supported_io_types": { 00:14:09.594 "read": true, 00:14:09.594 "write": true, 00:14:09.594 "unmap": true, 00:14:09.594 "write_zeroes": true, 00:14:09.594 "flush": false, 00:14:09.594 "reset": true, 00:14:09.594 "compare": false, 00:14:09.594 "compare_and_write": false, 00:14:09.594 "abort": false, 00:14:09.594 "nvme_admin": false, 00:14:09.594 "nvme_io": false 00:14:09.594 }, 00:14:09.594 "driver_specific": { 00:14:09.594 "lvol": { 00:14:09.594 "lvol_store_uuid": "27b5418f-ec3d-4ad7-ab9b-582876e497a7", 00:14:09.594 "base_bdev": "aio_bdev", 
00:14:09.594 "thin_provision": false, 00:14:09.594 "num_allocated_clusters": 38, 00:14:09.594 "snapshot": false, 00:14:09.594 "clone": false, 00:14:09.594 "esnap_clone": false 00:14:09.594 } 00:14:09.594 } 00:14:09.594 } 00:14:09.594 ] 00:14:09.594 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:09.594 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:09.594 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:09.594 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:09.594 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:09.594 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:09.855 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:09.855 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 286c0cea-06b8-498d-a242-e92bfa2efdfb 00:14:09.855 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27b5418f-ec3d-4ad7-ab9b-582876e497a7 00:14:10.116 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:10.376 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.376 00:14:10.376 real 0m15.056s 00:14:10.376 user 0m14.883s 00:14:10.376 sys 0m1.128s 00:14:10.376 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.376 16:58:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:10.376 ************************************ 00:14:10.376 END TEST lvs_grow_clean 00:14:10.376 ************************************ 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:10.376 ************************************ 00:14:10.376 START TEST lvs_grow_dirty 00:14:10.376 ************************************ 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:10.376 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:10.377 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.377 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.377 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:10.637 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:10.637 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:10.637 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:10.637 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:10.637 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:10.897 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:10.897 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:10.897 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 lvol 150 00:14:10.897 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:10.897 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.897 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:11.158 [2024-05-15 16:58:49.824519] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:11.158 [2024-05-15 16:58:49.824571] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:11.158 true 00:14:11.158 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:11.158 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:14:11.158 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:11.158 16:58:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:11.419 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:11.680 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.680 [2024-05-15 16:58:50.410339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.680 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1403507 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1403507 /var/tmp/bdevperf.sock 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1403507 ']' 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:11.940 [2024-05-15 16:58:50.596885] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
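Condensing the lvs_grow_dirty setup traced above into a hedged bash sketch; the commands are the ones shown in the log, with repo-relative paths and $lvs_uuid/$lvol_uuid standing in for the UUIDs the RPCs return:

  # 200 MiB file-backed AIO bdev, 4 KiB blocks, carrying an lvstore with 4 MiB clusters.
  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs      # 49 data clusters at this size

  # A 150 MiB lvol on that store, then grow the backing file and rescan the AIO bdev.
  scripts/rpc.py bdev_lvol_create -u "$lvs_uuid" lvol 150
  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev

  # Export the lvol over NVMe/TCP as cnode0.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Later, while bdevperf is running, the store itself is grown into the new space:
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"    # total_data_clusters: 49 -> 99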
00:14:11.940 [2024-05-15 16:58:50.596924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403507 ] 00:14:11.940 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.940 [2024-05-15 16:58:50.639801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.940 [2024-05-15 16:58:50.693863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:11.940 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:12.201 Nvme0n1 00:14:12.201 16:58:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:12.462 [ 00:14:12.462 { 00:14:12.462 "name": "Nvme0n1", 00:14:12.462 "aliases": [ 00:14:12.462 "2cb30934-836e-4c30-a38e-4781dd77cc25" 00:14:12.462 ], 00:14:12.462 "product_name": "NVMe disk", 00:14:12.462 "block_size": 4096, 00:14:12.462 "num_blocks": 38912, 00:14:12.462 "uuid": "2cb30934-836e-4c30-a38e-4781dd77cc25", 00:14:12.462 "assigned_rate_limits": { 00:14:12.462 "rw_ios_per_sec": 0, 00:14:12.462 "rw_mbytes_per_sec": 0, 00:14:12.462 "r_mbytes_per_sec": 0, 00:14:12.462 "w_mbytes_per_sec": 0 00:14:12.462 }, 00:14:12.462 "claimed": false, 00:14:12.462 "zoned": false, 00:14:12.462 "supported_io_types": { 00:14:12.462 "read": true, 00:14:12.462 "write": true, 00:14:12.462 "unmap": true, 00:14:12.462 "write_zeroes": true, 00:14:12.462 "flush": true, 00:14:12.462 "reset": true, 00:14:12.462 "compare": true, 00:14:12.462 "compare_and_write": true, 00:14:12.462 "abort": true, 00:14:12.462 "nvme_admin": true, 00:14:12.462 "nvme_io": true 00:14:12.462 }, 00:14:12.462 "memory_domains": [ 00:14:12.462 { 00:14:12.462 "dma_device_id": "system", 00:14:12.462 "dma_device_type": 1 00:14:12.462 } 00:14:12.462 ], 00:14:12.462 "driver_specific": { 00:14:12.462 "nvme": [ 00:14:12.462 { 00:14:12.462 "trid": { 00:14:12.462 "trtype": "TCP", 00:14:12.462 "adrfam": "IPv4", 00:14:12.462 "traddr": "10.0.0.2", 00:14:12.462 "trsvcid": "4420", 00:14:12.462 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:12.462 }, 00:14:12.462 "ctrlr_data": { 00:14:12.462 "cntlid": 1, 00:14:12.462 "vendor_id": "0x8086", 00:14:12.462 "model_number": "SPDK bdev Controller", 00:14:12.462 "serial_number": "SPDK0", 00:14:12.462 "firmware_revision": "24.05", 00:14:12.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:12.462 "oacs": { 00:14:12.462 "security": 0, 00:14:12.462 "format": 0, 00:14:12.462 "firmware": 0, 00:14:12.462 "ns_manage": 0 00:14:12.462 }, 00:14:12.462 "multi_ctrlr": true, 00:14:12.462 "ana_reporting": false 00:14:12.462 }, 00:14:12.462 "vs": { 00:14:12.462 "nvme_version": "1.3" 00:14:12.462 }, 00:14:12.462 "ns_data": { 00:14:12.462 "id": 1, 00:14:12.462 "can_share": true 00:14:12.462 } 00:14:12.462 } 00:14:12.462 ], 00:14:12.462 "mp_policy": "active_passive" 00:14:12.462 } 00:14:12.462 } 00:14:12.462 ] 00:14:12.462 16:58:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1403611 00:14:12.463 16:58:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:12.463 16:58:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:12.463 Running I/O for 10 seconds... 00:14:13.496 Latency(us) 00:14:13.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.496 Nvme0n1 : 1.00 18121.00 70.79 0.00 0.00 0.00 0.00 0.00 00:14:13.496 =================================================================================================================== 00:14:13.496 Total : 18121.00 70.79 0.00 0.00 0.00 0.00 0.00 00:14:13.496 00:14:14.437 16:58:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:14.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.437 Nvme0n1 : 2.00 18241.50 71.26 0.00 0.00 0.00 0.00 0.00 00:14:14.437 =================================================================================================================== 00:14:14.437 Total : 18241.50 71.26 0.00 0.00 0.00 0.00 0.00 00:14:14.437 00:14:14.698 true 00:14:14.698 16:58:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:14.698 16:58:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:14.698 16:58:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:14.698 16:58:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:14.698 16:58:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1403611 00:14:15.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.641 Nvme0n1 : 3.00 18283.33 71.42 0.00 0.00 0.00 0.00 0.00 00:14:15.641 =================================================================================================================== 00:14:15.641 Total : 18283.33 71.42 0.00 0.00 0.00 0.00 0.00 00:14:15.641 00:14:16.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.584 Nvme0n1 : 4.00 18320.00 71.56 0.00 0.00 0.00 0.00 0.00 00:14:16.584 =================================================================================================================== 00:14:16.584 Total : 18320.00 71.56 0.00 0.00 0.00 0.00 0.00 00:14:16.584 00:14:17.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.527 Nvme0n1 : 5.00 18341.60 71.65 0.00 0.00 0.00 0.00 0.00 00:14:17.527 =================================================================================================================== 00:14:17.527 Total : 18341.60 71.65 0.00 0.00 0.00 0.00 0.00 00:14:17.527 00:14:18.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.469 Nvme0n1 : 6.00 18365.50 71.74 0.00 0.00 0.00 0.00 0.00 00:14:18.470 
=================================================================================================================== 00:14:18.470 Total : 18365.50 71.74 0.00 0.00 0.00 0.00 0.00 00:14:18.470 00:14:19.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.411 Nvme0n1 : 7.00 18383.43 71.81 0.00 0.00 0.00 0.00 0.00 00:14:19.411 =================================================================================================================== 00:14:19.411 Total : 18383.43 71.81 0.00 0.00 0.00 0.00 0.00 00:14:19.411 00:14:20.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.796 Nvme0n1 : 8.00 18391.25 71.84 0.00 0.00 0.00 0.00 0.00 00:14:20.796 =================================================================================================================== 00:14:20.796 Total : 18391.25 71.84 0.00 0.00 0.00 0.00 0.00 00:14:20.796 00:14:21.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.738 Nvme0n1 : 9.00 18401.00 71.88 0.00 0.00 0.00 0.00 0.00 00:14:21.738 =================================================================================================================== 00:14:21.738 Total : 18401.00 71.88 0.00 0.00 0.00 0.00 0.00 00:14:21.738 00:14:22.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.680 Nvme0n1 : 10.00 18403.80 71.89 0.00 0.00 0.00 0.00 0.00 00:14:22.680 =================================================================================================================== 00:14:22.680 Total : 18403.80 71.89 0.00 0.00 0.00 0.00 0.00 00:14:22.680 00:14:22.680 00:14:22.680 Latency(us) 00:14:22.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.680 Nvme0n1 : 10.00 18408.58 71.91 0.00 0.00 6950.44 4341.76 15728.64 00:14:22.680 =================================================================================================================== 00:14:22.680 Total : 18408.58 71.91 0.00 0.00 6950.44 4341.76 15728.64 00:14:22.680 0 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1403507 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1403507 ']' 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1403507 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1403507 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1403507' 00:14:22.680 killing process with pid 1403507 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1403507 00:14:22.680 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.680 00:14:22.680 Latency(us) 00:14:22.680 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:22.680 =================================================================================================================== 00:14:22.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1403507 00:14:22.680 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.941 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1399880 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1399880 00:14:23.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1399880 Killed "${NVMF_APP[@]}" "$@" 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1405610 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1405610 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1405610 ']' 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
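A condensed sketch of the dirty restart sequence the surrounding trace covers: the target is killed hard so the lvstore is left open, then restarted and the AIO bdev re-created so blobstore recovery runs. PIDs and the namespace name are from this run; $nvmfpid is illustrative:

  # Leave the lvstore dirty: kill the nvmf target hard while the store is still open.
  kill -9 "$nvmfpid"                                  # 1399880 in this run

  # Restart the target in the test namespace; loading the store again triggers blobstore
  # recovery ("Performing recovery on blobstore", "Recover: blob 0x0/0x1" below).
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Re-create the AIO bdev over the same file so the lvstore is examined and reloaded.
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096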
00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.202 16:59:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:23.463 [2024-05-15 16:59:02.041230] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:14:23.463 [2024-05-15 16:59:02.041295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.463 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.463 [2024-05-15 16:59:02.113953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.463 [2024-05-15 16:59:02.179647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.463 [2024-05-15 16:59:02.179681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.463 [2024-05-15 16:59:02.179688] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.463 [2024-05-15 16:59:02.179695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.463 [2024-05-15 16:59:02.179700] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.463 [2024-05-15 16:59:02.179718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.034 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:24.296 [2024-05-15 16:59:02.964392] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:24.296 [2024-05-15 16:59:02.964478] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:24.296 [2024-05-15 16:59:02.964509] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:24.296 16:59:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:24.557 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2cb30934-836e-4c30-a38e-4781dd77cc25 -t 2000 00:14:24.557 [ 00:14:24.557 { 00:14:24.557 "name": "2cb30934-836e-4c30-a38e-4781dd77cc25", 00:14:24.557 "aliases": [ 00:14:24.557 "lvs/lvol" 00:14:24.557 ], 00:14:24.557 "product_name": "Logical Volume", 00:14:24.557 "block_size": 4096, 00:14:24.557 "num_blocks": 38912, 00:14:24.557 "uuid": "2cb30934-836e-4c30-a38e-4781dd77cc25", 00:14:24.557 "assigned_rate_limits": { 00:14:24.557 "rw_ios_per_sec": 0, 00:14:24.557 "rw_mbytes_per_sec": 0, 00:14:24.557 "r_mbytes_per_sec": 0, 00:14:24.557 "w_mbytes_per_sec": 0 00:14:24.557 }, 00:14:24.557 "claimed": false, 00:14:24.557 "zoned": false, 00:14:24.557 "supported_io_types": { 00:14:24.557 "read": true, 00:14:24.557 "write": true, 00:14:24.557 "unmap": true, 00:14:24.557 "write_zeroes": true, 00:14:24.557 "flush": false, 00:14:24.557 "reset": true, 00:14:24.557 "compare": false, 00:14:24.557 "compare_and_write": false, 00:14:24.557 "abort": false, 00:14:24.557 "nvme_admin": false, 00:14:24.557 "nvme_io": false 00:14:24.557 }, 00:14:24.557 "driver_specific": { 00:14:24.557 "lvol": { 00:14:24.557 "lvol_store_uuid": "1ab954c9-c65d-42e5-bc1f-02463db8ddc2", 00:14:24.557 "base_bdev": "aio_bdev", 00:14:24.557 "thin_provision": false, 00:14:24.557 "num_allocated_clusters": 38, 00:14:24.557 "snapshot": false, 00:14:24.558 "clone": false, 00:14:24.558 "esnap_clone": false 00:14:24.558 } 00:14:24.558 } 00:14:24.558 } 00:14:24.558 ] 00:14:24.558 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:24.558 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:24.558 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:24.819 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:24.819 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:24.819 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:24.819 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:24.819 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:25.080 [2024-05-15 16:59:03.736317] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.080 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:25.341 request: 00:14:25.341 { 00:14:25.341 "uuid": "1ab954c9-c65d-42e5-bc1f-02463db8ddc2", 00:14:25.341 "method": "bdev_lvol_get_lvstores", 00:14:25.341 "req_id": 1 00:14:25.341 } 00:14:25.341 Got JSON-RPC error response 00:14:25.341 response: 00:14:25.341 { 00:14:25.341 "code": -19, 00:14:25.341 "message": "No such device" 00:14:25.341 } 00:14:25.341 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:25.341 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.341 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.341 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.341 16:59:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:25.341 aio_bdev 00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
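The NOT/bdev_lvol_get_lvstores exchange above verifies that hot-removing the base bdev closes the lvstore; a simplified sketch of that check (the real script uses its NOT helper, UUIDs as in this run):

  # Deleting the base AIO bdev hot-removes the lvstore on top of it...
  scripts/rpc.py bdev_aio_delete aio_bdev

  # ...so querying the store must now fail with -19 "No such device".
  if scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2; then
      echo "lvstore unexpectedly still present" >&2; exit 1
  fi

  # Re-create the AIO bdev, wait for examine, and confirm the lvol is back with its
  # cluster accounting intact (free_clusters == 61, total_data_clusters == 99).
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b 2cb30934-836e-4c30-a38e-4781dd77cc25 -t 2000
  scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 | jq -r '.[0].free_clusters'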
00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:25.341 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:25.602 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2cb30934-836e-4c30-a38e-4781dd77cc25 -t 2000 00:14:25.602 [ 00:14:25.602 { 00:14:25.602 "name": "2cb30934-836e-4c30-a38e-4781dd77cc25", 00:14:25.602 "aliases": [ 00:14:25.602 "lvs/lvol" 00:14:25.602 ], 00:14:25.602 "product_name": "Logical Volume", 00:14:25.602 "block_size": 4096, 00:14:25.602 "num_blocks": 38912, 00:14:25.602 "uuid": "2cb30934-836e-4c30-a38e-4781dd77cc25", 00:14:25.602 "assigned_rate_limits": { 00:14:25.602 "rw_ios_per_sec": 0, 00:14:25.602 "rw_mbytes_per_sec": 0, 00:14:25.602 "r_mbytes_per_sec": 0, 00:14:25.602 "w_mbytes_per_sec": 0 00:14:25.602 }, 00:14:25.602 "claimed": false, 00:14:25.603 "zoned": false, 00:14:25.603 "supported_io_types": { 00:14:25.603 "read": true, 00:14:25.603 "write": true, 00:14:25.603 "unmap": true, 00:14:25.603 "write_zeroes": true, 00:14:25.603 "flush": false, 00:14:25.603 "reset": true, 00:14:25.603 "compare": false, 00:14:25.603 "compare_and_write": false, 00:14:25.603 "abort": false, 00:14:25.603 "nvme_admin": false, 00:14:25.603 "nvme_io": false 00:14:25.603 }, 00:14:25.603 "driver_specific": { 00:14:25.603 "lvol": { 00:14:25.603 "lvol_store_uuid": "1ab954c9-c65d-42e5-bc1f-02463db8ddc2", 00:14:25.603 "base_bdev": "aio_bdev", 00:14:25.603 "thin_provision": false, 00:14:25.603 "num_allocated_clusters": 38, 00:14:25.603 "snapshot": false, 00:14:25.603 "clone": false, 00:14:25.603 "esnap_clone": false 00:14:25.603 } 00:14:25.603 } 00:14:25.603 } 00:14:25.603 ] 00:14:25.603 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:25.603 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:25.603 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:25.863 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:25.863 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:25.864 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:25.864 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:25.864 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2cb30934-836e-4c30-a38e-4781dd77cc25 00:14:26.124 16:59:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ab954c9-c65d-42e5-bc1f-02463db8ddc2 00:14:26.384 16:59:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:26.384 16:59:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:26.384 00:14:26.384 real 0m16.156s 00:14:26.384 user 0m42.695s 00:14:26.384 sys 0m2.621s 00:14:26.384 16:59:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:26.384 16:59:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 ************************************ 00:14:26.384 END TEST lvs_grow_dirty 00:14:26.384 ************************************ 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:26.645 nvmf_trace.0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.645 rmmod nvme_tcp 00:14:26.645 rmmod nvme_fabrics 00:14:26.645 rmmod nvme_keyring 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1405610 ']' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1405610 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1405610 ']' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1405610 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1405610 00:14:26.645 16:59:05 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1405610' 00:14:26.645 killing process with pid 1405610 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1405610 00:14:26.645 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1405610 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.906 16:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.821 16:59:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.821 00:14:28.821 real 0m41.830s 00:14:28.821 user 1m3.489s 00:14:28.821 sys 0m9.257s 00:14:28.821 16:59:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:28.821 16:59:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:28.821 ************************************ 00:14:28.821 END TEST nvmf_lvs_grow 00:14:28.821 ************************************ 00:14:29.082 16:59:07 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:29.082 16:59:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:29.082 16:59:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.082 16:59:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.082 ************************************ 00:14:29.082 START TEST nvmf_bdev_io_wait 00:14:29.082 ************************************ 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:29.082 * Looking for test storage... 
00:14:29.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.082 16:59:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:37.220 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:37.221 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:37.221 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:37.221 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:37.221 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:37.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:14:37.221 00:14:37.221 --- 10.0.0.2 ping statistics --- 00:14:37.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.221 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:14:37.221 00:14:37.221 --- 10.0.0.1 ping statistics --- 00:14:37.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.221 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1410613 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1410613 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1410613 ']' 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:37.221 16:59:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.221 [2024-05-15 16:59:15.020196] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
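nvmf_tcp_init above puts the two ice/E810 ports on opposite sides of a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator address 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-tested. Condensed from the trace, the plumbing is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target then has to live inside that namespace, which is why nvmfappstart runs nvmf_tgt through 'ip netns exec cvl_0_0_ns_spdk'.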
00:14:37.221 [2024-05-15 16:59:15.020245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.221 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.221 [2024-05-15 16:59:15.085550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.221 [2024-05-15 16:59:15.152892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.221 [2024-05-15 16:59:15.152930] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.221 [2024-05-15 16:59:15.152937] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.221 [2024-05-15 16:59:15.152944] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.221 [2024-05-15 16:59:15.152949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.221 [2024-05-15 16:59:15.153085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.221 [2024-05-15 16:59:15.153202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.222 [2024-05-15 16:59:15.153357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.222 [2024-05-15 16:59:15.153358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 [2024-05-15 16:59:15.893533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 Malloc0 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 [2024-05-15 16:59:15.959696] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:37.222 [2024-05-15 16:59:15.959931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1410650 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1410652 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:37.222 { 00:14:37.222 "params": { 00:14:37.222 "name": "Nvme$subsystem", 00:14:37.222 "trtype": "$TEST_TRANSPORT", 00:14:37.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.222 "adrfam": "ipv4", 00:14:37.222 "trsvcid": "$NVMF_PORT", 00:14:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.222 "hdgst": ${hdgst:-false}, 00:14:37.222 "ddgst": ${ddgst:-false} 00:14:37.222 }, 00:14:37.222 "method": 
"bdev_nvme_attach_controller" 00:14:37.222 } 00:14:37.222 EOF 00:14:37.222 )") 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1410654 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:37.222 { 00:14:37.222 "params": { 00:14:37.222 "name": "Nvme$subsystem", 00:14:37.222 "trtype": "$TEST_TRANSPORT", 00:14:37.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.222 "adrfam": "ipv4", 00:14:37.222 "trsvcid": "$NVMF_PORT", 00:14:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.222 "hdgst": ${hdgst:-false}, 00:14:37.222 "ddgst": ${ddgst:-false} 00:14:37.222 }, 00:14:37.222 "method": "bdev_nvme_attach_controller" 00:14:37.222 } 00:14:37.222 EOF 00:14:37.222 )") 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1410657 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:37.222 { 00:14:37.222 "params": { 00:14:37.222 "name": "Nvme$subsystem", 00:14:37.222 "trtype": "$TEST_TRANSPORT", 00:14:37.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.222 "adrfam": "ipv4", 00:14:37.222 "trsvcid": "$NVMF_PORT", 00:14:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.222 "hdgst": ${hdgst:-false}, 00:14:37.222 "ddgst": ${ddgst:-false} 00:14:37.222 }, 00:14:37.222 "method": "bdev_nvme_attach_controller" 00:14:37.222 } 00:14:37.222 EOF 00:14:37.222 )") 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@554 -- # cat 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:37.222 { 00:14:37.222 "params": { 00:14:37.222 "name": "Nvme$subsystem", 00:14:37.222 "trtype": "$TEST_TRANSPORT", 00:14:37.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.222 "adrfam": "ipv4", 00:14:37.222 "trsvcid": "$NVMF_PORT", 00:14:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.222 "hdgst": ${hdgst:-false}, 00:14:37.222 "ddgst": ${ddgst:-false} 00:14:37.222 }, 00:14:37.222 "method": "bdev_nvme_attach_controller" 00:14:37.222 } 00:14:37.222 EOF 00:14:37.222 )") 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1410650 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.222 "params": { 00:14:37.222 "name": "Nvme1", 00:14:37.222 "trtype": "tcp", 00:14:37.222 "traddr": "10.0.0.2", 00:14:37.222 "adrfam": "ipv4", 00:14:37.222 "trsvcid": "4420", 00:14:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.222 "hdgst": false, 00:14:37.222 "ddgst": false 00:14:37.222 }, 00:14:37.222 "method": "bdev_nvme_attach_controller" 00:14:37.222 }' 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
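Before these bdevperf configs are assembled, the target side was configured entirely over JSON-RPC (the rpc_cmd calls above): the bdev I/O pool and cache are shrunk (-p 5 -c 1), the framework is started, a TCP transport is created, and a 64 MiB Malloc0 bdev with 512-byte blocks is exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. Issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket, the same setup is roughly:

    scripts/rpc.py bdev_set_options -p 5 -c 1     # tiny pool/cache, presumably to force the bdev io_wait path
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420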
00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:37.222 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.222 "params": { 00:14:37.222 "name": "Nvme1", 00:14:37.222 "trtype": "tcp", 00:14:37.222 "traddr": "10.0.0.2", 00:14:37.222 "adrfam": "ipv4", 00:14:37.222 "trsvcid": "4420", 00:14:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.222 "hdgst": false, 00:14:37.222 "ddgst": false 00:14:37.222 }, 00:14:37.222 "method": "bdev_nvme_attach_controller" 00:14:37.222 }' 00:14:37.223 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:37.223 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.223 "params": { 00:14:37.223 "name": "Nvme1", 00:14:37.223 "trtype": "tcp", 00:14:37.223 "traddr": "10.0.0.2", 00:14:37.223 "adrfam": "ipv4", 00:14:37.223 "trsvcid": "4420", 00:14:37.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.223 "hdgst": false, 00:14:37.223 "ddgst": false 00:14:37.223 }, 00:14:37.223 "method": "bdev_nvme_attach_controller" 00:14:37.223 }' 00:14:37.223 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:37.223 16:59:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.223 "params": { 00:14:37.223 "name": "Nvme1", 00:14:37.223 "trtype": "tcp", 00:14:37.223 "traddr": "10.0.0.2", 00:14:37.223 "adrfam": "ipv4", 00:14:37.223 "trsvcid": "4420", 00:14:37.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.223 "hdgst": false, 00:14:37.223 "ddgst": false 00:14:37.223 }, 00:14:37.223 "method": "bdev_nvme_attach_controller" 00:14:37.223 }' 00:14:37.223 [2024-05-15 16:59:16.011638] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:14:37.223 [2024-05-15 16:59:16.011690] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:37.223 [2024-05-15 16:59:16.013030] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:14:37.223 [2024-05-15 16:59:16.013075] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:37.223 [2024-05-15 16:59:16.014563] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:14:37.223 [2024-05-15 16:59:16.014613] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:37.223 [2024-05-15 16:59:16.014855] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
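Each of the four bdevperf jobs above receives its controller definition as a generated JSON config on /dev/fd/63; the printf output in the trace is the bdev_nvme_attach_controller entry that ends up inside it. Written out to a file (hypothetical name nvme1.json, wrapped in the usual SPDK subsystems/config layout, which the trace itself does not show in full), the write job could be reproduced roughly as:

    cat > nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf -m 0x10 -i 1 --json nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap jobs differ only in core mask (-m), shared-memory id (-i) and workload (-w), as the EAL parameter lines below show.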
00:14:37.223 [2024-05-15 16:59:16.014900] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:37.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.483 [2024-05-15 16:59:16.152899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.483 [2024-05-15 16:59:16.203888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:37.483 [2024-05-15 16:59:16.213452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.483 [2024-05-15 16:59:16.262005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.483 [2024-05-15 16:59:16.265302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:37.483 [2024-05-15 16:59:16.312027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:37.743 [2024-05-15 16:59:16.326724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.743 [2024-05-15 16:59:16.376188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:37.743 Running I/O for 1 seconds... 00:14:37.743 Running I/O for 1 seconds... 00:14:37.743 Running I/O for 1 seconds... 00:14:38.004 Running I/O for 1 seconds... 00:14:38.573 00:14:38.573 Latency(us) 00:14:38.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.574 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:38.574 Nvme1n1 : 1.00 14752.52 57.63 0.00 0.00 8653.05 4642.13 18677.76 00:14:38.574 =================================================================================================================== 00:14:38.574 Total : 14752.52 57.63 0.00 0.00 8653.05 4642.13 18677.76 00:14:38.833 00:14:38.833 Latency(us) 00:14:38.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.834 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:38.834 Nvme1n1 : 1.00 185815.52 725.84 0.00 0.00 686.10 276.48 771.41 00:14:38.834 =================================================================================================================== 00:14:38.834 Total : 185815.52 725.84 0.00 0.00 686.10 276.48 771.41 00:14:38.834 00:14:38.834 Latency(us) 00:14:38.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.834 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:38.834 Nvme1n1 : 1.00 17564.79 68.61 0.00 0.00 7267.87 3877.55 18677.76 00:14:38.834 =================================================================================================================== 00:14:38.834 Total : 17564.79 68.61 0.00 0.00 7267.87 3877.55 18677.76 00:14:38.834 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1410652 00:14:39.093 00:14:39.093 Latency(us) 00:14:39.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.093 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:39.093 Nvme1n1 : 1.01 12058.25 47.10 0.00 0.00 10579.50 6034.77 21408.43 00:14:39.093 =================================================================================================================== 00:14:39.093 Total : 12058.25 47.10 0.00 0.00 10579.50 
6034.77 21408.43 00:14:39.093 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1410654 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1410657 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.094 rmmod nvme_tcp 00:14:39.094 rmmod nvme_fabrics 00:14:39.094 rmmod nvme_keyring 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1410613 ']' 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1410613 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1410613 ']' 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1410613 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.094 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1410613 00:14:39.354 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:39.354 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:39.354 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1410613' 00:14:39.354 killing process with pid 1410613 00:14:39.354 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1410613 00:14:39.354 [2024-05-15 16:59:17.950983] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:39.354 16:59:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1410613 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.354 16:59:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.898 16:59:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:41.898 00:14:41.898 real 0m12.476s 00:14:41.898 user 0m18.995s 00:14:41.898 sys 0m6.729s 00:14:41.898 16:59:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.898 16:59:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.898 ************************************ 00:14:41.898 END TEST nvmf_bdev_io_wait 00:14:41.898 ************************************ 00:14:41.898 16:59:20 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:41.898 16:59:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:41.898 16:59:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.898 16:59:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:41.898 ************************************ 00:14:41.898 START TEST nvmf_queue_depth 00:14:41.898 ************************************ 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:41.899 * Looking for test storage... 
00:14:41.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:41.899 16:59:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.482 
16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:48.482 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:48.482 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:48.482 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:48.482 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.482 16:59:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.900 ms 00:14:48.483 00:14:48.483 --- 10.0.0.2 ping statistics --- 00:14:48.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.483 rtt min/avg/max/mdev = 0.900/0.900/0.900/0.000 ms 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:14:48.483 00:14:48.483 --- 10.0.0.1 ping statistics --- 00:14:48.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.483 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1415295 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1415295 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1415295 ']' 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:48.483 16:59:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.483 [2024-05-15 16:59:27.257626] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:14:48.483 [2024-05-15 16:59:27.257695] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.807 [2024-05-15 16:59:27.346467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.807 [2024-05-15 16:59:27.438811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.807 [2024-05-15 16:59:27.438863] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.807 [2024-05-15 16:59:27.438871] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.807 [2024-05-15 16:59:27.438878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.807 [2024-05-15 16:59:27.438884] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.807 [2024-05-15 16:59:27.438908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.378 [2024-05-15 16:59:28.094162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.378 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.379 Malloc0 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.379 16:59:28 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.379 [2024-05-15 16:59:28.168227] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:49.379 [2024-05-15 16:59:28.168521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1415335 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1415335 /var/tmp/bdevperf.sock 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1415335 ']' 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.379 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.640 [2024-05-15 16:59:28.222927] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:14:49.640 [2024-05-15 16:59:28.222992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415335 ] 00:14:49.640 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.640 [2024-05-15 16:59:28.288298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.640 [2024-05-15 16:59:28.363661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.270 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.270 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:50.270 16:59:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:50.270 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.270 16:59:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:50.530 NVMe0n1 00:14:50.530 16:59:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.530 16:59:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.530 Running I/O for 10 seconds... 00:15:02.753 00:15:02.753 Latency(us) 00:15:02.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.753 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:02.753 Verification LBA range: start 0x0 length 0x4000 00:15:02.753 NVMe0n1 : 10.07 11276.31 44.05 0.00 0.00 90491.02 25012.91 64225.28 00:15:02.753 =================================================================================================================== 00:15:02.753 Total : 11276.31 44.05 0.00 0.00 90491.02 25012.91 64225.28 00:15:02.753 0 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1415335 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1415335 ']' 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1415335 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1415335 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1415335' 00:15:02.753 killing process with pid 1415335 00:15:02.753 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1415335 00:15:02.753 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.753 00:15:02.753 Latency(us) 00:15:02.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.753 =================================================================================================================== 00:15:02.754 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1415335 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.754 rmmod nvme_tcp 00:15:02.754 rmmod nvme_fabrics 00:15:02.754 rmmod nvme_keyring 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1415295 ']' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1415295 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1415295 ']' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1415295 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1415295 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1415295' 00:15:02.754 killing process with pid 1415295 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1415295 00:15:02.754 [2024-05-15 16:59:39.694697] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1415295 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.754 16:59:39 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.326 16:59:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.326 00:15:03.326 real 0m21.681s 00:15:03.326 user 0m25.566s 00:15:03.326 sys 0m6.281s 00:15:03.326 16:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:03.326 16:59:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:03.326 ************************************ 00:15:03.326 END TEST nvmf_queue_depth 00:15:03.326 ************************************ 00:15:03.326 16:59:41 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:03.326 16:59:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:03.326 16:59:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:03.326 16:59:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.326 ************************************ 00:15:03.326 START TEST nvmf_target_multipath 00:15:03.326 ************************************ 00:15:03.326 16:59:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:03.326 * Looking for test storage... 00:15:03.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.326 16:59:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:03.327 16:59:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:11.461 16:59:48 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:11.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:11.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.461 16:59:48 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:11.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:11.461 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.461 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.462 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.462 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.462 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.462 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.462 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.462 16:59:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:15:11.462 00:15:11.462 --- 10.0.0.2 ping statistics --- 00:15:11.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.462 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:15:11.462 00:15:11.462 --- 10.0.0.1 ping statistics --- 00:15:11.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.462 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:11.462 only one NIC for nvmf test 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.462 rmmod nvme_tcp 00:15:11.462 rmmod nvme_fabrics 00:15:11.462 rmmod nvme_keyring 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.462 16:59:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.845 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:12.845 16:59:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:12.845 16:59:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:12.845 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.845 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.846 00:15:12.846 real 0m9.358s 00:15:12.846 user 0m2.090s 00:15:12.846 sys 0m5.178s 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:12.846 16:59:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:12.846 ************************************ 00:15:12.846 END TEST nvmf_target_multipath 00:15:12.846 ************************************ 00:15:12.846 16:59:51 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:12.846 16:59:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:12.846 16:59:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:12.846 16:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.846 ************************************ 00:15:12.846 START TEST nvmf_zcopy 00:15:12.846 ************************************ 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:12.846 * Looking for test storage... 
00:15:12.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:12.846 16:59:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:19.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.438 
16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:19.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:19.438 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:19.438 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.438 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.439 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:15:19.701 00:15:19.701 --- 10.0.0.2 ping statistics --- 00:15:19.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.701 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:15:19.701 00:15:19.701 --- 10.0.0.1 ping statistics --- 00:15:19.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.701 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1425798 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1425798 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1425798 ']' 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:19.701 16:59:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:19.701 [2024-05-15 16:59:58.437424] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:15:19.701 [2024-05-15 16:59:58.437487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.701 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.701 [2024-05-15 16:59:58.525850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.962 [2024-05-15 16:59:58.617900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.962 [2024-05-15 16:59:58.617956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:19.962 [2024-05-15 16:59:58.617964] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.962 [2024-05-15 16:59:58.617971] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.962 [2024-05-15 16:59:58.617977] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.962 [2024-05-15 16:59:58.618008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 [2024-05-15 16:59:59.277284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 [2024-05-15 16:59:59.301282] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:20.536 [2024-05-15 16:59:59.301531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 malloc0 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:20.536 { 00:15:20.536 "params": { 00:15:20.536 "name": "Nvme$subsystem", 00:15:20.536 "trtype": "$TEST_TRANSPORT", 00:15:20.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.536 "adrfam": "ipv4", 00:15:20.536 "trsvcid": "$NVMF_PORT", 00:15:20.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.536 "hdgst": ${hdgst:-false}, 00:15:20.536 "ddgst": ${ddgst:-false} 00:15:20.536 }, 00:15:20.536 "method": "bdev_nvme_attach_controller" 00:15:20.536 } 00:15:20.536 EOF 00:15:20.536 )") 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:20.536 16:59:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:20.536 "params": { 00:15:20.536 "name": "Nvme1", 00:15:20.536 "trtype": "tcp", 00:15:20.536 "traddr": "10.0.0.2", 00:15:20.536 "adrfam": "ipv4", 00:15:20.536 "trsvcid": "4420", 00:15:20.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.536 "hdgst": false, 00:15:20.536 "ddgst": false 00:15:20.536 }, 00:15:20.536 "method": "bdev_nvme_attach_controller" 00:15:20.536 }' 00:15:20.798 [2024-05-15 16:59:59.399855] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:15:20.798 [2024-05-15 16:59:59.399927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425913 ] 00:15:20.798 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.798 [2024-05-15 16:59:59.465361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.798 [2024-05-15 16:59:59.539308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.058 Running I/O for 10 seconds... 
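Everything from the namespace plumbing up to this first 10-second verify run is plain shell recorded in the trace. A condensed, stand-alone sketch of the same bring-up, assembled only from the commands shown above (rpc_cmd is the harness's wrapper around scripts/rpc.py; interface names, addresses and paths are the ones recorded in this run and paths are shortened to repo-relative form, so treat them as placeholders elsewhere):

# --- put the target-side port into its own namespace and address both ends ---
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2          # target address reachable from the default namespace

# --- start nvmf_tgt inside the namespace and provision it over RPC ---
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# (the harness waits for the /var/tmp/spdk.sock RPC socket before continuing)
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# --- drive I/O from the initiator side: 10 s verify workload over the zcopy transport ---
# (the harness passes the generated config via process substitution, which is
#  why the trace shows --json /dev/fd/62)
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192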
00:15:31.066
00:15:31.066 Latency(us)
00:15:31.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:31.066 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:31.066 Verification LBA range: start 0x0 length 0x1000
00:15:31.066 Nvme1n1 : 10.01 9152.68 71.51 0.00 0.00 13932.29 1802.24 27197.44
00:15:31.066 ===================================================================================================================
00:15:31.066 Total : 9152.68 71.51 0.00 0.00 13932.29 1802.24 27197.44
00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1428473 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:31.326 { 00:15:31.326 "params": { 00:15:31.326 "name": "Nvme$subsystem", 00:15:31.326 "trtype": "$TEST_TRANSPORT", 00:15:31.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.326 "adrfam": "ipv4", 00:15:31.326 "trsvcid": "$NVMF_PORT", 00:15:31.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.326 "hdgst": ${hdgst:-false}, 00:15:31.326 "ddgst": ${ddgst:-false} 00:15:31.326 }, 00:15:31.326 "method": "bdev_nvme_attach_controller" 00:15:31.326 } 00:15:31.326 EOF 00:15:31.326 )") 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:31.326 [2024-05-15 17:00:10.016401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.016429] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
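The '{ "params": ... "method": "bdev_nvme_attach_controller" }' fragment traced earlier is what gen_nvmf_target_json hands to bdevperf over a file descriptor (/dev/fd/62 for the verify run, /dev/fd/63 for this randrw run). A hedged sketch of an equivalent stand-alone generator, assuming the usual SPDK app JSON layout for the bdev subsystem (the helper's outer wrapper is not itself visible in the trace; all parameter values are the ones printed above):

bdevperf_target_json() {
# Emit a config that attaches one NVMe-oF controller over TCP to the
# listener created on 10.0.0.2:4420 earlier in this run.
cat << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}
# e.g. ./build/examples/bdevperf --json <(bdevperf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192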
00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:31.326 17:00:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:31.326 "params": { 00:15:31.326 "name": "Nvme1", 00:15:31.326 "trtype": "tcp", 00:15:31.326 "traddr": "10.0.0.2", 00:15:31.326 "adrfam": "ipv4", 00:15:31.326 "trsvcid": "4420", 00:15:31.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.326 "hdgst": false, 00:15:31.326 "ddgst": false 00:15:31.326 }, 00:15:31.326 "method": "bdev_nvme_attach_controller" 00:15:31.326 }' 00:15:31.326 [2024-05-15 17:00:10.028397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.028407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-05-15 17:00:10.040425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.040433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-05-15 17:00:10.052457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.052466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-05-15 17:00:10.053967] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:15:31.326 [2024-05-15 17:00:10.054018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428473 ] 00:15:31.326 [2024-05-15 17:00:10.064487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.064497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-05-15 17:00:10.076524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.076536] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.326 [2024-05-15 17:00:10.088552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.088561] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-05-15 17:00:10.100584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.326 [2024-05-15 17:00:10.100592] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.326 [2024-05-15 17:00:10.112611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-05-15 17:00:10.112619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-05-15 17:00:10.112959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.327 [2024-05-15 17:00:10.124641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-05-15 17:00:10.124653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.327 [2024-05-15 17:00:10.136670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-05-15 17:00:10.136679] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:31.327 [2024-05-15 17:00:10.148699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.327 [2024-05-15 17:00:10.148709] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-05-15 17:00:10.160731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.586 [2024-05-15 17:00:10.160741] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.586 [2024-05-15 17:00:10.172761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.172770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.178232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.587 [2024-05-15 17:00:10.184791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.184800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.196828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.196842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.208857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.208867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.220885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.220894] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.232918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.232926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.244945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.244953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.256985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.257000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.269007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.269017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.281039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.281049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.293074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.293083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.305104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.305112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 
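The repeated 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs from here to the end of the excerpt are the target rejecting namespace-add RPCs for NSID 1 on cnode1 while the original namespace is still attached and the 5-second randrw bdevperf job is in flight. The script's exact control flow is not visible in this excerpt; a hypothetical loop that would produce the same log pattern (perfpid is the bdevperf PID captured above):

# Keep re-issuing the add while bdevperf is alive; each attempt is rejected
# because NSID 1 already exists on cnode1, which is exactly the error logged here.
while kill -0 "$perfpid" 2> /dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done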
[2024-05-15 17:00:10.317145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.317160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 Running I/O for 5 seconds... 00:15:31.587 [2024-05-15 17:00:10.331063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.331077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.344507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.344524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.357113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.357130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.369951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.369966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.382648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.382665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.395218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.395234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.587 [2024-05-15 17:00:10.408369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.587 [2024-05-15 17:00:10.408389] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.421620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.421635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.434842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.434857] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.447318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.447334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.460213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.460228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.473476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.473492] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.486516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.486531] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.499639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:31.846 [2024-05-15 17:00:10.499654] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.513048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.513062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.526666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.526681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.539328] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.539344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.552110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.552125] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.564898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.564913] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.577281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.577296] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.590501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.590516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.603311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.603326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.616264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.616280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.629338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.629353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.642932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.642947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.655696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.655711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.846 [2024-05-15 17:00:10.668826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.846 [2024-05-15 17:00:10.668841] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.681854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.681870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 
17:00:10.695112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.695126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.708293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.708308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.720998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.721013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.733946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.733961] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.746998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.747013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.759463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.759477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.772785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.772800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.786292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.786306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.799389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.799404] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.812272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.812286] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.825256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.825271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.838740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.838755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.851092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.851107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.863964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.863978] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.877267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.877282] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.890574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.890589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.904083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.904098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.917365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.917380] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.106 [2024-05-15 17:00:10.930653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.106 [2024-05-15 17:00:10.930669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:10.943355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:10.943370] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:10.956907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:10.956922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:10.970022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:10.970036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:10.983237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:10.983251] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:10.996772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:10.996787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.009642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.009658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.022189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.022204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.035020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.035035] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.048090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.048105] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.061396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.061411] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.074799] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.074813] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.088238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.088252] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.101409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.101423] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.114877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.114892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.128585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.128600] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.141052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.141066] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.154227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.154243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.167487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.167502] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.180388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.180403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.367 [2024-05-15 17:00:11.193666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.367 [2024-05-15 17:00:11.193681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.207065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.207080] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.220185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.220200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.233062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.233077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.245974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.245990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.259544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.259563] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.272945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.272960] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.286119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.286133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.298543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.298561] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.311704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.311719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.324126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.324141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.336878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.336893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.350037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.350052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.362941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.362956] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.375887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.375902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.388180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.388194] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.400566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.400581] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.412659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.412673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.425710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.425725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.438747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.438761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.630 [2024-05-15 17:00:11.451776] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.630 [2024-05-15 17:00:11.451791] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.465237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.465252] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.478347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.478362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.490826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.490842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.504259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.504275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.517536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.517556] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.530550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.530564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.543607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.543621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.557242] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.557257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.570230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.570245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.583277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.583293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.595919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.595934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.608509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.608524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.620902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.620917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.890 [2024-05-15 17:00:11.633961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.890 [2024-05-15 17:00:11.633980] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:32.890 [2024-05-15 17:00:11.647437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:32.890 [2024-05-15 17:00:11.647453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[This same pair of errors repeats roughly every 13 ms, from 17:00:11.647 through 17:00:15.229 (elapsed 00:15:32.890 to 00:15:36.574), as the test keeps issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still allocated; the intermediate repetitions are identical apart from their timestamps.]
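For reference, this is the failure mode the target reports when an add-namespace RPC requests an NSID that is already allocated in the subsystem. A minimal way to reproduce the same pair of messages by hand against a running target is sketched below; the subsystem NQN and the malloc0 bdev name are taken from this run, while the use of scripts/rpc.py (rather than the harness's rpc_cmd wrapper) and the second, deliberately conflicting call are assumptions for illustration:

# Sketch only: provoke "Requested NSID 1 already in use" / "Unable to add namespace" on purpose.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # first add claims NSID 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # second add is rejected: NSID 1 is already in use
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1            # release NSID 1 again

The NSID check happens before the bdev is opened, so the second call fails with exactly this error pair regardless of which bdev is named.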
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.229332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.242565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.242580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.255689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.255705] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.268838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.268853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.281549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.281565] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.294508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.294524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.307745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.307760] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.320847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.320861] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.333612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.333628] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 00:15:36.574 Latency(us) 00:15:36.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.574 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:36.574 Nvme1n1 : 5.00 19571.71 152.90 0.00 0.00 6533.44 2935.47 12779.52 00:15:36.574 =================================================================================================================== 00:15:36.574 Total : 19571.71 152.90 0.00 0.00 6533.44 2935.47 12779.52 00:15:36.574 [2024-05-15 17:00:15.343371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.343386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.355400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.355413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.367435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.367446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.379465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.574 [2024-05-15 17:00:15.379477] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.574 [2024-05-15 17:00:15.391494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.575 [2024-05-15 17:00:15.391504] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.575 [2024-05-15 17:00:15.403522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.575 [2024-05-15 17:00:15.403532] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 [2024-05-15 17:00:15.415557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.835 [2024-05-15 17:00:15.415566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 [2024-05-15 17:00:15.427588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.835 [2024-05-15 17:00:15.427597] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 [2024-05-15 17:00:15.439617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.835 [2024-05-15 17:00:15.439644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 [2024-05-15 17:00:15.451651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.835 [2024-05-15 17:00:15.451662] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 [2024-05-15 17:00:15.463677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.835 [2024-05-15 17:00:15.463687] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 [2024-05-15 17:00:15.475706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.835 [2024-05-15 17:00:15.475715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1428473) - No such process 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1428473 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.835 delay0 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.835 17:00:15 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:36.835 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.835 [2024-05-15 17:00:15.615698] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:43.425 Initializing NVMe Controllers 00:15:43.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:43.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:43.425 Initialization complete. Launching workers. 00:15:43.425 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 296, failed: 9128 00:15:43.425 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9365, failed to submit 59 00:15:43.425 success 9257, unsuccess 108, failed 0 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.425 rmmod nvme_tcp 00:15:43.425 rmmod nvme_fabrics 00:15:43.425 rmmod nvme_keyring 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1425798 ']' 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1425798 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1425798 ']' 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1425798 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:43.425 17:00:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1425798 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1425798' 00:15:43.425 killing process with pid 1425798 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1425798 00:15:43.425 [2024-05-15 17:00:22.012470] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1425798 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.425 17:00:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.503 17:00:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.503 00:15:45.503 real 0m32.872s 00:15:45.503 user 0m44.541s 00:15:45.503 sys 0m10.115s 00:15:45.503 17:00:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:45.503 17:00:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:45.503 ************************************ 00:15:45.503 END TEST nvmf_zcopy 00:15:45.503 ************************************ 00:15:45.503 17:00:24 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:45.503 17:00:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:45.503 17:00:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:45.503 17:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.503 ************************************ 00:15:45.503 START TEST nvmf_nmic 00:15:45.503 ************************************ 00:15:45.503 17:00:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:45.771 * Looking for test storage... 
00:15:45.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.771 17:00:24 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.771 17:00:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.360 
17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:52.360 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.360 17:00:31 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:52.360 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.360 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:52.361 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:52.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
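For orientation, the nvmf_tcp_init setup traced here and continuing below amounts to a small piece of iproute2 plumbing: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The sketch below is a condensed reconstruction from the commands this run prints, not the common.sh source itself:

    # target port lives in its own netns; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check

The nvmf_tgt application is later launched under ip netns exec cvl_0_0_ns_spdk, which is why the listener on 10.0.0.2:4420 is reachable from the initiator-side tools running in the root namespace.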
00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.361 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:52.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:15:52.622 00:15:52.622 --- 10.0.0.2 ping statistics --- 00:15:52.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.622 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:15:52.622 00:15:52.622 --- 10.0.0.1 ping statistics --- 00:15:52.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.622 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1435016 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1435016 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1435016 ']' 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:52.622 17:00:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:52.882 [2024-05-15 17:00:31.475264] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:15:52.882 [2024-05-15 17:00:31.475328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.882 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.882 [2024-05-15 17:00:31.548074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.882 [2024-05-15 17:00:31.624100] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.882 [2024-05-15 17:00:31.624136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:52.882 [2024-05-15 17:00:31.624144] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.882 [2024-05-15 17:00:31.624151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.882 [2024-05-15 17:00:31.624156] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.882 [2024-05-15 17:00:31.624300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.882 [2024-05-15 17:00:31.624413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.882 [2024-05-15 17:00:31.624582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.882 [2024-05-15 17:00:31.624610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.453 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:53.453 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:15:53.453 17:00:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.453 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:53.453 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 [2024-05-15 17:00:32.302104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 Malloc0 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 [2024-05-15 17:00:32.361355] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:53.713 [2024-05-15 17:00:32.361579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:53.713 test case1: single bdev can't be used in multiple subsystems 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.713 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.713 [2024-05-15 17:00:32.397517] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:53.713 [2024-05-15 17:00:32.397534] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:53.713 [2024-05-15 17:00:32.397541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.713 request: 00:15:53.713 { 00:15:53.713 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:53.713 "namespace": { 00:15:53.713 "bdev_name": "Malloc0", 00:15:53.713 "no_auto_visible": false 00:15:53.713 }, 00:15:53.713 "method": "nvmf_subsystem_add_ns", 00:15:53.713 "req_id": 1 00:15:53.713 } 00:15:53.713 Got JSON-RPC error response 00:15:53.713 response: 00:15:53.713 { 00:15:53.713 "code": -32602, 00:15:53.714 "message": "Invalid parameters" 00:15:53.714 } 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:53.714 Adding namespace failed - expected result. 
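The error above is the expected result for test case1: adding Malloc0 to cnode1 took an exclusive_write claim on the bdev, so the attempt to add the same bdev to cnode2 is rejected and surfaces as the JSON-RPC error (code -32602) that the script's nmic_status check keys on. Outside the harness the same check can be reproduced with plain rpc.py calls; the following is a minimal sketch that assumes an nvmf_tgt is already running on the default RPC socket, with the NQNs and serial numbers simply mirroring the ones used in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # first claim succeeds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo 'expected failure: Malloc0 already claimed by cnode1'

A non-zero exit status from the second nvmf_subsystem_add_ns is the pass condition here; a zero status would mean the bdev had been claimed twice, which is exactly what the test guards against.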
00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:53.714 test case2: host connect to nvmf target in multiple paths 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.714 [2024-05-15 17:00:32.409633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.714 17:00:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.096 17:00:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:57.006 17:00:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.006 17:00:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:15:57.006 17:00:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.006 17:00:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:57.006 17:00:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:15:58.935 17:00:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:58.935 [global] 00:15:58.935 thread=1 00:15:58.935 invalidate=1 00:15:58.935 rw=write 00:15:58.935 time_based=1 00:15:58.935 runtime=1 00:15:58.935 ioengine=libaio 00:15:58.935 direct=1 00:15:58.935 bs=4096 00:15:58.935 iodepth=1 00:15:58.935 norandommap=0 00:15:58.935 numjobs=1 00:15:58.935 00:15:58.935 verify_dump=1 00:15:58.935 verify_backlog=512 00:15:58.935 verify_state_save=0 00:15:58.935 do_verify=1 00:15:58.935 verify=crc32c-intel 00:15:58.935 [job0] 00:15:58.935 filename=/dev/nvme0n1 00:15:58.935 Could not set queue depth (nvme0n1) 00:15:59.204 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.204 fio-3.35 00:15:59.204 Starting 1 thread 00:16:00.587 00:16:00.587 job0: (groupid=0, jobs=1): err= 0: pid=1436303: Wed May 15 17:00:39 2024 00:16:00.587 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:00.587 slat (nsec): min=6861, max=59396, avg=26191.01, stdev=3097.06 
00:16:00.587 clat (usec): min=823, max=1273, avg=1054.87, stdev=68.26 00:16:00.587 lat (usec): min=830, max=1298, avg=1081.06, stdev=68.38 00:16:00.587 clat percentiles (usec): 00:16:00.587 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1004], 00:16:00.587 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:16:00.587 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:16:00.587 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1270], 00:16:00.587 | 99.99th=[ 1270] 00:16:00.587 write: IOPS=698, BW=2793KiB/s (2860kB/s)(2796KiB/1001msec); 0 zone resets 00:16:00.587 slat (nsec): min=8665, max=69621, avg=26731.75, stdev=11124.33 00:16:00.587 clat (usec): min=262, max=972, avg=599.79, stdev=112.70 00:16:00.587 lat (usec): min=272, max=1005, avg=626.52, stdev=119.33 00:16:00.587 clat percentiles (usec): 00:16:00.587 | 1.00th=[ 318], 5.00th=[ 392], 10.00th=[ 445], 20.00th=[ 494], 00:16:00.587 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 644], 00:16:00.587 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 766], 00:16:00.587 | 99.00th=[ 816], 99.50th=[ 824], 99.90th=[ 971], 99.95th=[ 971], 00:16:00.587 | 99.99th=[ 971] 00:16:00.587 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:00.587 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:00.587 lat (usec) : 500=12.39%, 750=41.29%, 1000=11.89% 00:16:00.587 lat (msec) : 2=34.43% 00:16:00.587 cpu : usr=2.90%, sys=3.90%, ctx=1211, majf=0, minf=1 00:16:00.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.587 issued rwts: total=512,699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.587 00:16:00.587 Run status group 0 (all jobs): 00:16:00.587 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:00.587 WRITE: bw=2793KiB/s (2860kB/s), 2793KiB/s-2793KiB/s (2860kB/s-2860kB/s), io=2796KiB (2863kB), run=1001-1001msec 00:16:00.587 00:16:00.587 Disk stats (read/write): 00:16:00.587 nvme0n1: ios=562/543, merge=0/0, ticks=573/265, in_queue=838, util=94.19% 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.587 rmmod nvme_tcp 00:16:00.587 rmmod nvme_fabrics 00:16:00.587 rmmod nvme_keyring 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1435016 ']' 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1435016 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1435016 ']' 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1435016 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1435016 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1435016' 00:16:00.587 killing process with pid 1435016 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1435016 00:16:00.587 [2024-05-15 17:00:39.301858] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:00.587 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1435016 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.847 17:00:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.764 17:00:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:02.764 00:16:02.764 real 0m17.273s 00:16:02.764 user 0m45.069s 00:16:02.764 sys 0m6.013s 00:16:02.764 17:00:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:02.764 17:00:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:02.764 ************************************ 00:16:02.764 END TEST nvmf_nmic 00:16:02.764 ************************************ 00:16:02.764 17:00:41 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:02.764 17:00:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:02.764 17:00:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:02.764 17:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.764 ************************************ 00:16:02.764 START TEST nvmf_fio_target 00:16:02.764 ************************************ 00:16:02.764 17:00:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:03.024 * Looking for test storage... 00:16:03.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.025 17:00:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.601 17:00:48 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:09.601 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:09.601 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.601 17:00:48 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:09.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:09.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.601 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:16:09.862 00:16:09.862 --- 10.0.0.2 ping statistics --- 00:16:09.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.862 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:16:09.862 00:16:09.862 --- 10.0.0.1 ping statistics --- 00:16:09.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.862 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.862 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1440772 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1440772 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 1440772 ']' 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
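In summary, the nvmftestinit plumbing traced above reduces to a few steps: one E810 port (cvl_0_0) is moved into a private network namespace for the target, the peer port (cvl_0_1) stays in the host namespace for the initiator, and TCP port 4420 is opened between them. A condensed sketch follows, with commands, names, and addresses reproduced from the trace (the real helper also flushes addresses and handles errors, so this is not the full script):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # host -> namespace check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host check

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the nvmfpid=1440772 process that waitforlisten polls for on /var/tmp/spdk.sock.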
00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:10.123 17:00:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.123 [2024-05-15 17:00:48.764017] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:16:10.123 [2024-05-15 17:00:48.764082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.123 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.123 [2024-05-15 17:00:48.835838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.123 [2024-05-15 17:00:48.911015] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.123 [2024-05-15 17:00:48.911053] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.123 [2024-05-15 17:00:48.911061] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.123 [2024-05-15 17:00:48.911067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.123 [2024-05-15 17:00:48.911073] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.123 [2024-05-15 17:00:48.911218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.123 [2024-05-15 17:00:48.911332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.123 [2024-05-15 17:00:48.911491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.123 [2024-05-15 17:00:48.911492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.755 17:00:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.755 17:00:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:10.755 17:00:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.755 17:00:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.755 17:00:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.016 17:00:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.016 17:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.016 [2024-05-15 17:00:49.726538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.016 17:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.277 17:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:11.277 17:00:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.277 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:11.277 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.537 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:11.537 17:00:50 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.798 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:11.798 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:12.059 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.059 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:12.059 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.320 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:12.320 17:00:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.580 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:12.580 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:12.580 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:12.840 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:12.840 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.840 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:12.840 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:13.100 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.360 [2024-05-15 17:00:51.963930] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:13.360 [2024-05-15 17:00:51.964212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.360 17:00:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:13.360 17:00:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:13.620 17:00:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.532 17:00:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:16:15.532 17:00:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:15.532 17:00:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.532 17:00:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:15.532 17:00:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:15.532 17:00:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:17.467 17:00:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:17.467 [global] 00:16:17.467 thread=1 00:16:17.467 invalidate=1 00:16:17.467 rw=write 00:16:17.467 time_based=1 00:16:17.467 runtime=1 00:16:17.467 ioengine=libaio 00:16:17.467 direct=1 00:16:17.467 bs=4096 00:16:17.467 iodepth=1 00:16:17.467 norandommap=0 00:16:17.467 numjobs=1 00:16:17.467 00:16:17.467 verify_dump=1 00:16:17.467 verify_backlog=512 00:16:17.467 verify_state_save=0 00:16:17.467 do_verify=1 00:16:17.467 verify=crc32c-intel 00:16:17.467 [job0] 00:16:17.467 filename=/dev/nvme0n1 00:16:17.467 [job1] 00:16:17.467 filename=/dev/nvme0n2 00:16:17.467 [job2] 00:16:17.467 filename=/dev/nvme0n3 00:16:17.467 [job3] 00:16:17.467 filename=/dev/nvme0n4 00:16:17.467 Could not set queue depth (nvme0n1) 00:16:17.467 Could not set queue depth (nvme0n2) 00:16:17.467 Could not set queue depth (nvme0n3) 00:16:17.467 Could not set queue depth (nvme0n4) 00:16:17.731 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.731 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.731 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.731 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.731 fio-3.35 00:16:17.731 Starting 4 threads 00:16:19.138 00:16:19.138 job0: (groupid=0, jobs=1): err= 0: pid=1442475: Wed May 15 17:00:57 2024 00:16:19.138 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:19.138 slat (nsec): min=7177, max=60537, avg=25721.87, stdev=3403.09 00:16:19.138 clat (usec): min=577, max=1278, avg=1031.50, stdev=93.43 00:16:19.138 lat (usec): min=584, max=1304, avg=1057.22, stdev=93.68 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 725], 5.00th=[ 840], 10.00th=[ 914], 20.00th=[ 979], 00:16:19.138 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:16:19.138 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1139], 00:16:19.138 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1287], 00:16:19.138 | 
99.99th=[ 1287] 00:16:19.138 write: IOPS=689, BW=2757KiB/s (2823kB/s)(2760KiB/1001msec); 0 zone resets 00:16:19.138 slat (nsec): min=9478, max=56950, avg=28583.50, stdev=10505.48 00:16:19.138 clat (usec): min=301, max=1059, avg=623.02, stdev=117.93 00:16:19.138 lat (usec): min=337, max=1099, avg=651.60, stdev=122.71 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 375], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 519], 00:16:19.138 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:16:19.138 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 807], 00:16:19.138 | 99.00th=[ 889], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:19.138 | 99.99th=[ 1057] 00:16:19.138 bw ( KiB/s): min= 4096, max= 4096, per=47.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.138 lat (usec) : 500=9.07%, 750=42.01%, 1000=17.30% 00:16:19.138 lat (msec) : 2=31.61% 00:16:19.138 cpu : usr=1.70%, sys=3.50%, ctx=1204, majf=0, minf=1 00:16:19.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.138 issued rwts: total=512,690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.138 job1: (groupid=0, jobs=1): err= 0: pid=1442476: Wed May 15 17:00:57 2024 00:16:19.138 read: IOPS=436, BW=1748KiB/s (1790kB/s)(1816KiB/1039msec) 00:16:19.138 slat (nsec): min=6540, max=62204, avg=23259.46, stdev=6926.38 00:16:19.138 clat (usec): min=364, max=42046, avg=1652.32, stdev=6054.95 00:16:19.138 lat (usec): min=385, max=42073, avg=1675.58, stdev=6055.62 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 408], 5.00th=[ 502], 10.00th=[ 578], 20.00th=[ 652], 00:16:19.138 | 30.00th=[ 685], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 799], 00:16:19.138 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 947], 00:16:19.138 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:19.138 | 99.99th=[42206] 00:16:19.138 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:16:19.138 slat (nsec): min=9755, max=81721, avg=31839.22, stdev=8003.04 00:16:19.138 clat (usec): min=161, max=755, avg=495.82, stdev=122.95 00:16:19.138 lat (usec): min=171, max=805, avg=527.66, stdev=126.12 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 188], 5.00th=[ 273], 10.00th=[ 302], 20.00th=[ 396], 00:16:19.138 | 30.00th=[ 437], 40.00th=[ 482], 50.00th=[ 510], 60.00th=[ 537], 00:16:19.138 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 676], 00:16:19.138 | 99.00th=[ 717], 99.50th=[ 750], 99.90th=[ 758], 99.95th=[ 758], 00:16:19.138 | 99.99th=[ 758] 00:16:19.138 bw ( KiB/s): min= 4096, max= 4096, per=47.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.138 lat (usec) : 250=1.55%, 500=24.64%, 750=48.14%, 1000=24.33% 00:16:19.138 lat (msec) : 2=0.31%, 50=1.04% 00:16:19.138 cpu : usr=1.35%, sys=2.79%, ctx=967, majf=0, minf=1 00:16:19.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.138 issued rwts: total=454,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:19.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.138 job2: (groupid=0, jobs=1): err= 0: pid=1442477: Wed May 15 17:00:57 2024 00:16:19.138 read: IOPS=16, BW=66.4KiB/s (68.0kB/s)(68.0KiB/1024msec) 00:16:19.138 slat (nsec): min=25154, max=26215, avg=25463.24, stdev=264.34 00:16:19.138 clat (usec): min=1021, max=42019, avg=39522.85, stdev=9922.52 00:16:19.138 lat (usec): min=1047, max=42044, avg=39548.32, stdev=9922.47 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41681], 00:16:19.138 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:19.138 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:19.138 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:19.138 | 99.99th=[42206] 00:16:19.138 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:16:19.138 slat (nsec): min=9753, max=66818, avg=29640.10, stdev=9389.53 00:16:19.138 clat (usec): min=279, max=910, avg=650.18, stdev=106.68 00:16:19.138 lat (usec): min=290, max=943, avg=679.82, stdev=111.11 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 379], 5.00th=[ 433], 10.00th=[ 502], 20.00th=[ 586], 00:16:19.138 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[ 685], 00:16:19.138 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 766], 95.00th=[ 799], 00:16:19.138 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 914], 00:16:19.138 | 99.99th=[ 914] 00:16:19.138 bw ( KiB/s): min= 4096, max= 4096, per=47.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.138 lat (usec) : 500=9.64%, 750=72.97%, 1000=14.18% 00:16:19.138 lat (msec) : 2=0.19%, 50=3.02% 00:16:19.138 cpu : usr=0.68%, sys=1.47%, ctx=530, majf=0, minf=1 00:16:19.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.138 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.138 job3: (groupid=0, jobs=1): err= 0: pid=1442478: Wed May 15 17:00:57 2024 00:16:19.138 read: IOPS=14, BW=59.9KiB/s (61.4kB/s)(60.0KiB/1001msec) 00:16:19.138 slat (nsec): min=10164, max=30920, avg=25782.27, stdev=4471.17 00:16:19.138 clat (usec): min=1068, max=42010, avg=39137.80, stdev=10534.36 00:16:19.138 lat (usec): min=1078, max=42036, avg=39163.58, stdev=10538.69 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41157], 00:16:19.138 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:19.138 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:19.138 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:19.138 | 99.99th=[42206] 00:16:19.138 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:19.138 slat (usec): min=9, max=43766, avg=188.90, stdev=2495.13 00:16:19.138 clat (usec): min=164, max=1042, avg=611.51, stdev=128.83 00:16:19.138 lat (usec): min=197, max=44397, avg=800.41, stdev=2500.59 00:16:19.138 clat percentiles (usec): 00:16:19.138 | 1.00th=[ 306], 5.00th=[ 400], 10.00th=[ 441], 20.00th=[ 502], 00:16:19.138 | 30.00th=[ 545], 40.00th=[ 586], 
50.00th=[ 619], 60.00th=[ 660], 00:16:19.138 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 807], 00:16:19.138 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1045], 99.95th=[ 1045], 00:16:19.138 | 99.99th=[ 1045] 00:16:19.138 bw ( KiB/s): min= 4096, max= 4096, per=47.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.138 lat (usec) : 250=0.19%, 500=18.98%, 750=66.22%, 1000=11.57% 00:16:19.138 lat (msec) : 2=0.38%, 50=2.66% 00:16:19.138 cpu : usr=1.10%, sys=2.10%, ctx=530, majf=0, minf=1 00:16:19.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.139 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.139 00:16:19.139 Run status group 0 (all jobs): 00:16:19.139 READ: bw=3842KiB/s (3934kB/s), 59.9KiB/s-2046KiB/s (61.4kB/s-2095kB/s), io=3992KiB (4088kB), run=1001-1039msec 00:16:19.139 WRITE: bw=8570KiB/s (8775kB/s), 1971KiB/s-2757KiB/s (2018kB/s-2823kB/s), io=8904KiB (9118kB), run=1001-1039msec 00:16:19.139 00:16:19.139 Disk stats (read/write): 00:16:19.139 nvme0n1: ios=503/512, merge=0/0, ticks=583/313, in_queue=896, util=86.97% 00:16:19.139 nvme0n2: ios=498/512, merge=0/0, ticks=1365/232, in_queue=1597, util=87.96% 00:16:19.139 nvme0n3: ios=76/512, merge=0/0, ticks=579/321, in_queue=900, util=95.14% 00:16:19.139 nvme0n4: ios=65/512, merge=0/0, ticks=1025/239, in_queue=1264, util=97.12% 00:16:19.139 17:00:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:19.139 [global] 00:16:19.139 thread=1 00:16:19.139 invalidate=1 00:16:19.139 rw=randwrite 00:16:19.139 time_based=1 00:16:19.139 runtime=1 00:16:19.139 ioengine=libaio 00:16:19.139 direct=1 00:16:19.139 bs=4096 00:16:19.139 iodepth=1 00:16:19.139 norandommap=0 00:16:19.139 numjobs=1 00:16:19.139 00:16:19.139 verify_dump=1 00:16:19.139 verify_backlog=512 00:16:19.139 verify_state_save=0 00:16:19.139 do_verify=1 00:16:19.139 verify=crc32c-intel 00:16:19.139 [job0] 00:16:19.139 filename=/dev/nvme0n1 00:16:19.139 [job1] 00:16:19.139 filename=/dev/nvme0n2 00:16:19.139 [job2] 00:16:19.139 filename=/dev/nvme0n3 00:16:19.139 [job3] 00:16:19.139 filename=/dev/nvme0n4 00:16:19.139 Could not set queue depth (nvme0n1) 00:16:19.139 Could not set queue depth (nvme0n2) 00:16:19.139 Could not set queue depth (nvme0n3) 00:16:19.139 Could not set queue depth (nvme0n4) 00:16:19.399 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.399 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.399 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.399 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.399 fio-3.35 00:16:19.399 Starting 4 threads 00:16:20.805 00:16:20.805 job0: (groupid=0, jobs=1): err= 0: pid=1442995: Wed May 15 17:00:59 2024 00:16:20.805 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:20.805 slat (nsec): min=9870, max=60252, avg=25187.54, stdev=3268.31 00:16:20.805 clat (usec): 
min=603, max=1271, avg=989.43, stdev=85.17 00:16:20.805 lat (usec): min=628, max=1296, avg=1014.62, stdev=85.25 00:16:20.805 clat percentiles (usec): 00:16:20.805 | 1.00th=[ 775], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 930], 00:16:20.805 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:16:20.805 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:16:20.805 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1270], 99.95th=[ 1270], 00:16:20.805 | 99.99th=[ 1270] 00:16:20.805 write: IOPS=736, BW=2945KiB/s (3016kB/s)(2948KiB/1001msec); 0 zone resets 00:16:20.805 slat (nsec): min=9337, max=68583, avg=28000.17, stdev=9371.18 00:16:20.805 clat (usec): min=257, max=935, avg=605.66, stdev=115.96 00:16:20.805 lat (usec): min=268, max=966, avg=633.66, stdev=119.87 00:16:20.805 clat percentiles (usec): 00:16:20.805 | 1.00th=[ 326], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 510], 00:16:20.805 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 627], 00:16:20.805 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 799], 00:16:20.805 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 938], 00:16:20.805 | 99.99th=[ 938] 00:16:20.805 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:20.805 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:20.805 lat (usec) : 500=9.77%, 750=42.99%, 1000=29.06% 00:16:20.805 lat (msec) : 2=18.17% 00:16:20.805 cpu : usr=2.00%, sys=3.30%, ctx=1250, majf=0, minf=1 00:16:20.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.805 issued rwts: total=512,737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.805 job1: (groupid=0, jobs=1): err= 0: pid=1442996: Wed May 15 17:00:59 2024 00:16:20.805 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:16:20.805 slat (nsec): min=24846, max=26247, avg=25432.28, stdev=426.85 00:16:20.805 clat (usec): min=1282, max=42053, avg=39526.21, stdev=9551.89 00:16:20.805 lat (usec): min=1307, max=42079, avg=39551.64, stdev=9551.91 00:16:20.805 clat percentiles (usec): 00:16:20.805 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41157], 20.00th=[41157], 00:16:20.805 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:16:20.805 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:20.805 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:20.805 | 99.99th=[42206] 00:16:20.805 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:20.805 slat (usec): min=8, max=240, avg=29.28, stdev=11.90 00:16:20.805 clat (usec): min=277, max=870, avg=602.82, stdev=114.38 00:16:20.805 lat (usec): min=286, max=1028, avg=632.10, stdev=118.33 00:16:20.805 clat percentiles (usec): 00:16:20.805 | 1.00th=[ 302], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 506], 00:16:20.805 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:16:20.805 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 775], 00:16:20.805 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 873], 99.95th=[ 873], 00:16:20.805 | 99.99th=[ 873] 00:16:20.805 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:20.805 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:16:20.805 lat (usec) : 500=18.11%, 750=69.25%, 1000=9.25% 00:16:20.805 lat (msec) : 2=0.19%, 50=3.21% 00:16:20.805 cpu : usr=1.06%, sys=1.92%, ctx=531, majf=0, minf=1 00:16:20.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.805 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.805 job2: (groupid=0, jobs=1): err= 0: pid=1442997: Wed May 15 17:00:59 2024 00:16:20.805 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1019msec) 00:16:20.805 slat (nsec): min=25375, max=26013, avg=25617.29, stdev=146.05 00:16:20.805 clat (usec): min=1283, max=42172, avg=39473.58, stdev=9844.48 00:16:20.805 lat (usec): min=1309, max=42198, avg=39499.20, stdev=9844.52 00:16:20.805 clat percentiles (usec): 00:16:20.805 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41157], 20.00th=[41681], 00:16:20.805 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:16:20.805 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:20.805 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:20.805 | 99.99th=[42206] 00:16:20.805 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:16:20.805 slat (nsec): min=9456, max=61911, avg=27413.08, stdev=9740.99 00:16:20.805 clat (usec): min=294, max=1060, avg=635.91, stdev=133.35 00:16:20.805 lat (usec): min=305, max=1093, avg=663.33, stdev=136.02 00:16:20.805 clat percentiles (usec): 00:16:20.806 | 1.00th=[ 322], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 529], 00:16:20.806 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 685], 00:16:20.806 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 840], 00:16:20.806 | 99.00th=[ 906], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:20.806 | 99.99th=[ 1057] 00:16:20.806 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:20.806 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:20.806 lat (usec) : 500=14.56%, 750=61.44%, 1000=20.42% 00:16:20.806 lat (msec) : 2=0.57%, 50=3.02% 00:16:20.806 cpu : usr=0.49%, sys=1.57%, ctx=532, majf=0, minf=1 00:16:20.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.806 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.806 job3: (groupid=0, jobs=1): err= 0: pid=1442998: Wed May 15 17:00:59 2024 00:16:20.806 read: IOPS=507, BW=2032KiB/s (2080kB/s)(2056KiB/1012msec) 00:16:20.806 slat (nsec): min=6751, max=59343, avg=23185.36, stdev=6311.68 00:16:20.806 clat (usec): min=355, max=41448, avg=944.01, stdev=2521.95 00:16:20.806 lat (usec): min=362, max=41457, avg=967.20, stdev=2521.61 00:16:20.806 clat percentiles (usec): 00:16:20.806 | 1.00th=[ 453], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 685], 00:16:20.806 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 840], 00:16:20.806 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 955], 00:16:20.806 | 99.00th=[ 1037], 99.50th=[ 1057], 99.90th=[41681], 99.95th=[41681], 00:16:20.806 | 99.99th=[41681] 
00:16:20.806 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:16:20.806 slat (nsec): min=8988, max=59970, avg=27784.74, stdev=8482.19 00:16:20.806 clat (usec): min=176, max=905, avg=462.94, stdev=119.59 00:16:20.806 lat (usec): min=187, max=921, avg=490.73, stdev=122.69 00:16:20.806 clat percentiles (usec): 00:16:20.806 | 1.00th=[ 233], 5.00th=[ 265], 10.00th=[ 326], 20.00th=[ 355], 00:16:20.806 | 30.00th=[ 388], 40.00th=[ 429], 50.00th=[ 457], 60.00th=[ 486], 00:16:20.806 | 70.00th=[ 523], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 676], 00:16:20.806 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 865], 99.95th=[ 906], 00:16:20.806 | 99.99th=[ 906] 00:16:20.806 bw ( KiB/s): min= 4096, max= 4096, per=38.24%, avg=4096.00, stdev= 0.00, samples=2 00:16:20.806 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:20.806 lat (usec) : 250=1.95%, 500=41.74%, 750=33.94%, 1000=21.78% 00:16:20.806 lat (msec) : 2=0.46%, 50=0.13% 00:16:20.806 cpu : usr=2.18%, sys=4.15%, ctx=1538, majf=0, minf=1 00:16:20.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.806 issued rwts: total=514,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.806 00:16:20.806 Run status group 0 (all jobs): 00:16:20.806 READ: bw=4081KiB/s (4179kB/s), 66.7KiB/s-2046KiB/s (68.3kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1040msec 00:16:20.806 WRITE: bw=10.5MiB/s (11.0MB/s), 1969KiB/s-4047KiB/s (2016kB/s-4145kB/s), io=10.9MiB (11.4MB), run=1001-1040msec 00:16:20.806 00:16:20.806 Disk stats (read/write): 00:16:20.806 nvme0n1: ios=542/512, merge=0/0, ticks=826/290, in_queue=1116, util=88.98% 00:16:20.806 nvme0n2: ios=63/512, merge=0/0, ticks=797/239, in_queue=1036, util=95.51% 00:16:20.806 nvme0n3: ios=44/512, merge=0/0, ticks=863/314, in_queue=1177, util=96.31% 00:16:20.806 nvme0n4: ios=569/829, merge=0/0, ticks=526/361, in_queue=887, util=96.70% 00:16:20.806 17:00:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:20.806 [global] 00:16:20.806 thread=1 00:16:20.806 invalidate=1 00:16:20.806 rw=write 00:16:20.806 time_based=1 00:16:20.806 runtime=1 00:16:20.806 ioengine=libaio 00:16:20.806 direct=1 00:16:20.806 bs=4096 00:16:20.806 iodepth=128 00:16:20.806 norandommap=0 00:16:20.806 numjobs=1 00:16:20.806 00:16:20.806 verify_dump=1 00:16:20.806 verify_backlog=512 00:16:20.806 verify_state_save=0 00:16:20.806 do_verify=1 00:16:20.806 verify=crc32c-intel 00:16:20.806 [job0] 00:16:20.806 filename=/dev/nvme0n1 00:16:20.806 [job1] 00:16:20.806 filename=/dev/nvme0n2 00:16:20.806 [job2] 00:16:20.806 filename=/dev/nvme0n3 00:16:20.806 [job3] 00:16:20.806 filename=/dev/nvme0n4 00:16:20.806 Could not set queue depth (nvme0n1) 00:16:20.806 Could not set queue depth (nvme0n2) 00:16:20.806 Could not set queue depth (nvme0n3) 00:16:20.806 Could not set queue depth (nvme0n4) 00:16:21.066 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.066 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.066 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:16:21.066 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.066 fio-3.35 00:16:21.066 Starting 4 threads 00:16:22.463 00:16:22.463 job0: (groupid=0, jobs=1): err= 0: pid=1443517: Wed May 15 17:01:00 2024 00:16:22.463 read: IOPS=8188, BW=32.0MiB/s (33.5MB/s)(32.2MiB/1007msec) 00:16:22.463 slat (nsec): min=875, max=7632.0k, avg=59243.33, stdev=413466.33 00:16:22.463 clat (usec): min=1890, max=26988, avg=7882.75, stdev=2401.02 00:16:22.463 lat (usec): min=1928, max=26990, avg=7942.00, stdev=2427.22 00:16:22.463 clat percentiles (usec): 00:16:22.463 | 1.00th=[ 3621], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6128], 00:16:22.463 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 8029], 00:16:22.463 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[11994], 00:16:22.463 | 99.00th=[15270], 99.50th=[19006], 99.90th=[24249], 99.95th=[24249], 00:16:22.463 | 99.99th=[26870] 00:16:22.463 write: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec); 0 zone resets 00:16:22.463 slat (nsec): min=1620, max=6637.4k, avg=53847.88, stdev=320881.80 00:16:22.463 clat (usec): min=1139, max=26986, avg=7189.74, stdev=3568.34 00:16:22.463 lat (usec): min=1149, max=26989, avg=7243.59, stdev=3584.93 00:16:22.463 clat percentiles (usec): 00:16:22.463 | 1.00th=[ 2180], 5.00th=[ 3359], 10.00th=[ 4015], 20.00th=[ 4817], 00:16:22.463 | 30.00th=[ 5538], 40.00th=[ 5866], 50.00th=[ 6259], 60.00th=[ 7046], 00:16:22.463 | 70.00th=[ 7570], 80.00th=[ 8356], 90.00th=[11207], 95.00th=[15401], 00:16:22.463 | 99.00th=[20579], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:16:22.463 | 99.99th=[26870] 00:16:22.463 bw ( KiB/s): min=32184, max=36864, per=34.64%, avg=34524.00, stdev=3309.26, samples=2 00:16:22.463 iops : min= 8046, max= 9216, avg=8631.00, stdev=827.31, samples=2 00:16:22.463 lat (msec) : 2=0.30%, 4=5.52%, 10=80.48%, 20=12.74%, 50=0.97% 00:16:22.463 cpu : usr=5.57%, sys=7.95%, ctx=785, majf=0, minf=1 00:16:22.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:22.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.463 issued rwts: total=8246,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.463 job1: (groupid=0, jobs=1): err= 0: pid=1443518: Wed May 15 17:01:00 2024 00:16:22.463 read: IOPS=4252, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec) 00:16:22.463 slat (nsec): min=1387, max=19407k, avg=136331.39, stdev=1037842.42 00:16:22.463 clat (usec): min=1299, max=48190, avg=17571.92, stdev=9055.48 00:16:22.463 lat (usec): min=1310, max=48215, avg=17708.25, stdev=9144.54 00:16:22.463 clat percentiles (usec): 00:16:22.463 | 1.00th=[ 3949], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[10290], 00:16:22.463 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13566], 60.00th=[16057], 00:16:22.463 | 70.00th=[22676], 80.00th=[26608], 90.00th=[31065], 95.00th=[35390], 00:16:22.463 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[47449], 00:16:22.463 | 99.99th=[47973] 00:16:22.463 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:16:22.463 slat (nsec): min=1612, max=13329k, avg=78775.58, stdev=517086.50 00:16:22.463 clat (usec): min=1116, max=38911, avg=11162.45, stdev=4388.35 00:16:22.463 lat (usec): min=1150, max=38921, avg=11241.23, stdev=4411.19 00:16:22.463 clat percentiles 
(usec): 00:16:22.463 | 1.00th=[ 3490], 5.00th=[ 5211], 10.00th=[ 7046], 20.00th=[ 8029], 00:16:22.463 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11338], 00:16:22.463 | 70.00th=[12387], 80.00th=[14746], 90.00th=[15533], 95.00th=[17957], 00:16:22.463 | 99.00th=[26870], 99.50th=[27657], 99.90th=[27919], 99.95th=[28181], 00:16:22.463 | 99.99th=[39060] 00:16:22.463 bw ( KiB/s): min=16384, max=20480, per=18.50%, avg=18432.00, stdev=2896.31, samples=2 00:16:22.463 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:16:22.463 lat (msec) : 2=0.70%, 4=0.87%, 10=31.74%, 20=48.65%, 50=18.05% 00:16:22.463 cpu : usr=3.09%, sys=4.98%, ctx=330, majf=0, minf=1 00:16:22.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:22.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.463 issued rwts: total=4274,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.463 job2: (groupid=0, jobs=1): err= 0: pid=1443519: Wed May 15 17:01:00 2024 00:16:22.463 read: IOPS=4162, BW=16.3MiB/s (17.0MB/s)(16.3MiB/1003msec) 00:16:22.464 slat (nsec): min=945, max=12026k, avg=98284.39, stdev=694817.40 00:16:22.464 clat (usec): min=1177, max=41014, avg=11812.32, stdev=4852.14 00:16:22.464 lat (usec): min=2653, max=41022, avg=11910.60, stdev=4914.81 00:16:22.464 clat percentiles (usec): 00:16:22.464 | 1.00th=[ 5080], 5.00th=[ 7308], 10.00th=[ 7898], 20.00th=[ 8455], 00:16:22.464 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:16:22.464 | 70.00th=[12256], 80.00th=[14746], 90.00th=[18220], 95.00th=[21890], 00:16:22.464 | 99.00th=[28967], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:16:22.464 | 99.99th=[41157] 00:16:22.464 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:16:22.464 slat (nsec): min=1620, max=9624.7k, avg=122878.86, stdev=668100.74 00:16:22.464 clat (usec): min=970, max=78838, avg=16896.10, stdev=15810.19 00:16:22.464 lat (usec): min=978, max=78847, avg=17018.98, stdev=15920.35 00:16:22.464 clat percentiles (usec): 00:16:22.464 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6783], 20.00th=[ 8455], 00:16:22.464 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[11731], 60.00th=[14091], 00:16:22.464 | 70.00th=[15139], 80.00th=[16581], 90.00th=[37487], 95.00th=[62129], 00:16:22.464 | 99.00th=[74974], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:16:22.464 | 99.99th=[79168] 00:16:22.464 bw ( KiB/s): min=17744, max=18736, per=18.30%, avg=18240.00, stdev=701.45, samples=2 00:16:22.464 iops : min= 4436, max= 4684, avg=4560.00, stdev=175.36, samples=2 00:16:22.464 lat (usec) : 1000=0.03% 00:16:22.464 lat (msec) : 2=0.11%, 4=0.42%, 10=43.36%, 20=44.54%, 50=7.86% 00:16:22.464 lat (msec) : 100=3.68% 00:16:22.464 cpu : usr=3.29%, sys=4.79%, ctx=388, majf=0, minf=1 00:16:22.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:22.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.464 issued rwts: total=4175,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.464 job3: (groupid=0, jobs=1): err= 0: pid=1443520: Wed May 15 17:01:00 2024 00:16:22.464 read: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(26.5MiB/1003msec) 
00:16:22.464 slat (nsec): min=980, max=10662k, avg=66950.77, stdev=514184.75 00:16:22.464 clat (usec): min=1122, max=26410, avg=9266.97, stdev=2746.37 00:16:22.464 lat (usec): min=2927, max=26425, avg=9333.92, stdev=2781.68 00:16:22.464 clat percentiles (usec): 00:16:22.464 | 1.00th=[ 3752], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 7635], 00:16:22.464 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:16:22.464 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[13042], 95.00th=[14877], 00:16:22.464 | 99.00th=[17695], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:16:22.464 | 99.99th=[26346] 00:16:22.464 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:16:22.464 slat (nsec): min=1680, max=13973k, avg=60587.56, stdev=464955.32 00:16:22.464 clat (usec): min=324, max=49804, avg=8719.99, stdev=6487.86 00:16:22.464 lat (usec): min=358, max=49811, avg=8780.57, stdev=6536.15 00:16:22.464 clat percentiles (usec): 00:16:22.464 | 1.00th=[ 1680], 5.00th=[ 3654], 10.00th=[ 4883], 20.00th=[ 5997], 00:16:22.464 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7898], 00:16:22.464 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[11207], 95.00th=[16712], 00:16:22.464 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[49546], 00:16:22.464 | 99.99th=[49546] 00:16:22.464 bw ( KiB/s): min=26304, max=31024, per=28.76%, avg=28664.00, stdev=3337.54, samples=2 00:16:22.464 iops : min= 6576, max= 7756, avg=7166.00, stdev=834.39, samples=2 00:16:22.464 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.15% 00:16:22.464 lat (msec) : 2=0.50%, 4=3.55%, 10=75.71%, 20=17.47%, 50=2.58% 00:16:22.464 cpu : usr=5.09%, sys=8.28%, ctx=448, majf=0, minf=1 00:16:22.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:22.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.464 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.464 00:16:22.464 Run status group 0 (all jobs): 00:16:22.464 READ: bw=91.1MiB/s (95.5MB/s), 16.3MiB/s-32.0MiB/s (17.0MB/s-33.5MB/s), io=91.7MiB (96.2MB), run=1003-1007msec 00:16:22.464 WRITE: bw=97.3MiB/s (102MB/s), 17.9MiB/s-33.8MiB/s (18.8MB/s-35.4MB/s), io=98.0MiB (103MB), run=1003-1007msec 00:16:22.464 00:16:22.464 Disk stats (read/write): 00:16:22.464 nvme0n1: ios=7697/7694, merge=0/0, ticks=55302/45359, in_queue=100661, util=88.98% 00:16:22.464 nvme0n2: ios=3324/3584, merge=0/0, ticks=34158/22808, in_queue=56966, util=96.94% 00:16:22.464 nvme0n3: ios=3128/3584, merge=0/0, ticks=35749/66881, in_queue=102630, util=91.89% 00:16:22.464 nvme0n4: ios=5691/6035, merge=0/0, ticks=47759/51188, in_queue=98947, util=99.68% 00:16:22.464 17:01:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:22.464 [global] 00:16:22.464 thread=1 00:16:22.464 invalidate=1 00:16:22.464 rw=randwrite 00:16:22.464 time_based=1 00:16:22.464 runtime=1 00:16:22.464 ioengine=libaio 00:16:22.464 direct=1 00:16:22.464 bs=4096 00:16:22.464 iodepth=128 00:16:22.464 norandommap=0 00:16:22.464 numjobs=1 00:16:22.464 00:16:22.464 verify_dump=1 00:16:22.464 verify_backlog=512 00:16:22.464 verify_state_save=0 00:16:22.464 do_verify=1 00:16:22.464 verify=crc32c-intel 00:16:22.464 [job0] 00:16:22.464 filename=/dev/nvme0n1 
00:16:22.464 [job1] 00:16:22.464 filename=/dev/nvme0n2 00:16:22.464 [job2] 00:16:22.464 filename=/dev/nvme0n3 00:16:22.464 [job3] 00:16:22.464 filename=/dev/nvme0n4 00:16:22.464 Could not set queue depth (nvme0n1) 00:16:22.464 Could not set queue depth (nvme0n2) 00:16:22.464 Could not set queue depth (nvme0n3) 00:16:22.464 Could not set queue depth (nvme0n4) 00:16:22.731 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.731 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.731 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.731 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.731 fio-3.35 00:16:22.731 Starting 4 threads 00:16:24.145 00:16:24.145 job0: (groupid=0, jobs=1): err= 0: pid=1444037: Wed May 15 17:01:02 2024 00:16:24.146 read: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec) 00:16:24.146 slat (nsec): min=933, max=7487.3k, avg=62978.53, stdev=440464.06 00:16:24.146 clat (usec): min=2938, max=18757, avg=8297.50, stdev=2270.47 00:16:24.146 lat (usec): min=2943, max=18759, avg=8360.48, stdev=2293.84 00:16:24.146 clat percentiles (usec): 00:16:24.146 | 1.00th=[ 3523], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6718], 00:16:24.146 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8160], 00:16:24.146 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[11731], 95.00th=[12780], 00:16:24.146 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17171], 99.95th=[18744], 00:16:24.146 | 99.99th=[18744] 00:16:24.146 write: IOPS=8426, BW=32.9MiB/s (34.5MB/s)(33.1MiB/1005msec); 0 zone resets 00:16:24.146 slat (nsec): min=1616, max=5762.6k, avg=52562.97, stdev=303921.42 00:16:24.146 clat (usec): min=1344, max=18758, avg=7021.22, stdev=2515.36 00:16:24.146 lat (usec): min=1355, max=18762, avg=7073.78, stdev=2523.77 00:16:24.146 clat percentiles (usec): 00:16:24.146 | 1.00th=[ 2376], 5.00th=[ 3523], 10.00th=[ 4080], 20.00th=[ 5080], 00:16:24.146 | 30.00th=[ 5932], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 6980], 00:16:24.146 | 70.00th=[ 7308], 80.00th=[ 8586], 90.00th=[10552], 95.00th=[12649], 00:16:24.146 | 99.00th=[13829], 99.50th=[14746], 99.90th=[17171], 99.95th=[17171], 00:16:24.146 | 99.99th=[18744] 00:16:24.146 bw ( KiB/s): min=32768, max=33960, per=34.62%, avg=33364.00, stdev=842.87, samples=2 00:16:24.146 iops : min= 8192, max= 8490, avg=8341.00, stdev=210.72, samples=2 00:16:24.146 lat (msec) : 2=0.21%, 4=5.17%, 10=78.79%, 20=15.83% 00:16:24.146 cpu : usr=5.08%, sys=8.17%, ctx=780, majf=0, minf=1 00:16:24.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.146 issued rwts: total=8192,8469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.146 job1: (groupid=0, jobs=1): err= 0: pid=1444039: Wed May 15 17:01:02 2024 00:16:24.146 read: IOPS=5072, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1006msec) 00:16:24.146 slat (nsec): min=931, max=13910k, avg=91453.04, stdev=736113.28 00:16:24.146 clat (usec): min=1233, max=33543, avg=12064.88, stdev=5634.33 00:16:24.146 lat (usec): min=2150, max=34317, avg=12156.33, stdev=5687.18 00:16:24.146 clat percentiles (usec): 
00:16:24.146 | 1.00th=[ 4113], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7439], 00:16:24.146 | 30.00th=[ 8291], 40.00th=[10028], 50.00th=[10945], 60.00th=[11994], 00:16:24.146 | 70.00th=[13042], 80.00th=[16712], 90.00th=[19268], 95.00th=[23462], 00:16:24.146 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:16:24.146 | 99.99th=[33424] 00:16:24.146 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:16:24.146 slat (nsec): min=1560, max=35047k, avg=93192.16, stdev=797876.50 00:16:24.146 clat (usec): min=716, max=65142, avg=12896.67, stdev=9655.57 00:16:24.146 lat (usec): min=887, max=65152, avg=12989.86, stdev=9707.57 00:16:24.146 clat percentiles (usec): 00:16:24.146 | 1.00th=[ 2606], 5.00th=[ 4752], 10.00th=[ 5800], 20.00th=[ 6521], 00:16:24.146 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[10028], 60.00th=[11207], 00:16:24.146 | 70.00th=[12649], 80.00th=[16450], 90.00th=[26870], 95.00th=[35390], 00:16:24.146 | 99.00th=[52691], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:16:24.146 | 99.99th=[65274] 00:16:24.146 bw ( KiB/s): min=20480, max=20480, per=21.25%, avg=20480.00, stdev= 0.00, samples=2 00:16:24.146 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:24.146 lat (usec) : 750=0.01%, 1000=0.01% 00:16:24.146 lat (msec) : 2=0.31%, 4=1.72%, 10=41.98%, 20=44.65%, 50=10.73% 00:16:24.146 lat (msec) : 100=0.58% 00:16:24.146 cpu : usr=3.38%, sys=5.57%, ctx=317, majf=0, minf=1 00:16:24.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.146 issued rwts: total=5103,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.146 job2: (groupid=0, jobs=1): err= 0: pid=1444043: Wed May 15 17:01:02 2024 00:16:24.146 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:16:24.146 slat (nsec): min=884, max=12290k, avg=91579.50, stdev=630014.93 00:16:24.146 clat (usec): min=4106, max=36790, avg=11820.92, stdev=3932.05 00:16:24.146 lat (usec): min=4112, max=36799, avg=11912.50, stdev=3979.81 00:16:24.146 clat percentiles (usec): 00:16:24.146 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 9110], 00:16:24.146 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10290], 60.00th=[10945], 00:16:24.146 | 70.00th=[12780], 80.00th=[14746], 90.00th=[17695], 95.00th=[20055], 00:16:24.146 | 99.00th=[24773], 99.50th=[27395], 99.90th=[27657], 99.95th=[29230], 00:16:24.146 | 99.99th=[36963] 00:16:24.146 write: IOPS=6023, BW=23.5MiB/s (24.7MB/s)(23.6MiB/1003msec); 0 zone resets 00:16:24.146 slat (nsec): min=1479, max=16728k, avg=72439.00, stdev=553513.41 00:16:24.146 clat (usec): min=1199, max=48688, avg=10027.96, stdev=5725.71 00:16:24.147 lat (usec): min=1211, max=48693, avg=10100.40, stdev=5749.17 00:16:24.147 clat percentiles (usec): 00:16:24.147 | 1.00th=[ 2933], 5.00th=[ 4555], 10.00th=[ 6194], 20.00th=[ 7701], 00:16:24.147 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[ 9634], 00:16:24.147 | 70.00th=[10028], 80.00th=[10683], 90.00th=[13960], 95.00th=[18482], 00:16:24.147 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:16:24.147 | 99.99th=[48497] 00:16:24.147 bw ( KiB/s): min=21200, max=26120, per=24.55%, avg=23660.00, stdev=3478.97, samples=2 00:16:24.147 iops : min= 5300, max= 6530, avg=5915.00, stdev=869.74, samples=2 
00:16:24.147 lat (msec) : 2=0.16%, 4=1.52%, 10=54.14%, 20=39.49%, 50=4.69% 00:16:24.147 cpu : usr=3.49%, sys=4.79%, ctx=554, majf=0, minf=1 00:16:24.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:24.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.147 issued rwts: total=5632,6042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.147 job3: (groupid=0, jobs=1): err= 0: pid=1444044: Wed May 15 17:01:02 2024 00:16:24.147 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec) 00:16:24.147 slat (nsec): min=944, max=16664k, avg=124544.12, stdev=1000220.20 00:16:24.147 clat (usec): min=5922, max=57034, avg=17202.20, stdev=10593.22 00:16:24.147 lat (usec): min=5994, max=57040, avg=17326.74, stdev=10665.65 00:16:24.147 clat percentiles (usec): 00:16:24.147 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10814], 00:16:24.147 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12780], 00:16:24.147 | 70.00th=[15664], 80.00th=[27132], 90.00th=[33162], 95.00th=[38011], 00:16:24.147 | 99.00th=[54789], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:16:24.147 | 99.99th=[56886] 00:16:24.147 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:16:24.147 slat (nsec): min=1585, max=16104k, avg=90335.71, stdev=637442.54 00:16:24.147 clat (usec): min=1268, max=32075, avg=11921.13, stdev=5297.27 00:16:24.147 lat (usec): min=1277, max=33129, avg=12011.46, stdev=5342.91 00:16:24.147 clat percentiles (usec): 00:16:24.147 | 1.00th=[ 2376], 5.00th=[ 5276], 10.00th=[ 6652], 20.00th=[ 8225], 00:16:24.147 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10945], 60.00th=[11076], 00:16:24.147 | 70.00th=[12518], 80.00th=[15533], 90.00th=[20317], 95.00th=[23200], 00:16:24.147 | 99.00th=[28967], 99.50th=[29492], 99.90th=[32113], 99.95th=[32113], 00:16:24.147 | 99.99th=[32113] 00:16:24.147 bw ( KiB/s): min=16344, max=20480, per=19.10%, avg=18412.00, stdev=2924.59, samples=2 00:16:24.147 iops : min= 4086, max= 5120, avg=4603.00, stdev=731.15, samples=2 00:16:24.147 lat (msec) : 2=0.32%, 4=1.04%, 10=26.54%, 20=54.10%, 50=16.60% 00:16:24.147 lat (msec) : 100=1.40% 00:16:24.147 cpu : usr=3.39%, sys=4.68%, ctx=325, majf=0, minf=1 00:16:24.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:24.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.147 issued rwts: total=4218,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.147 00:16:24.147 Run status group 0 (all jobs): 00:16:24.147 READ: bw=89.9MiB/s (94.2MB/s), 16.4MiB/s-31.8MiB/s (17.2MB/s-33.4MB/s), io=90.4MiB (94.8MB), run=1003-1006msec 00:16:24.147 WRITE: bw=94.1MiB/s (98.7MB/s), 17.9MiB/s-32.9MiB/s (18.8MB/s-34.5MB/s), io=94.7MiB (99.3MB), run=1003-1006msec 00:16:24.147 00:16:24.147 Disk stats (read/write): 00:16:24.147 nvme0n1: ios=6679/7159, merge=0/0, ticks=52982/48953, in_queue=101935, util=96.49% 00:16:24.147 nvme0n2: ios=4586/4608, merge=0/0, ticks=42119/42108, in_queue=84227, util=96.44% 00:16:24.147 nvme0n3: ios=4655/4623, merge=0/0, ticks=40892/32535, in_queue=73427, util=99.79% 00:16:24.147 nvme0n4: ios=3633/3860, merge=0/0, ticks=26335/20727, in_queue=47062, util=98.08% 
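The block above closes out the first fio pass over the four NVMe/TCP namespaces: per-job latency percentiles, the aggregate READ/WRITE bandwidth for run status group 0, and per-device disk stats. For orientation, a standalone invocation in the same spirit is sketched below; it only approximates what scripts/fio-wrapper generates, and the device paths, block size, queue depth and runtime are assumptions read off the surrounding output rather than the wrapper's exact options.

  # Rough stand-in for the mixed-I/O pass summarised above (assumes the four
  # attached namespaces show up as /dev/nvme0n1..n4 and that libaio is present).
  fio --name=nvmf-mixed --ioengine=libaio --direct=1 --bs=4k --iodepth=128 \
      --rw=randrw --time_based --runtime=10 --group_reporting \
      --filename=/dev/nvme0n1:/dev/nvme0n2:/dev/nvme0n3:/dev/nvme0n4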
00:16:24.147 17:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:24.147 17:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1444309 00:16:24.147 17:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:24.147 17:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:24.147 [global] 00:16:24.147 thread=1 00:16:24.147 invalidate=1 00:16:24.147 rw=read 00:16:24.147 time_based=1 00:16:24.147 runtime=10 00:16:24.147 ioengine=libaio 00:16:24.147 direct=1 00:16:24.147 bs=4096 00:16:24.147 iodepth=1 00:16:24.147 norandommap=1 00:16:24.147 numjobs=1 00:16:24.147 00:16:24.147 [job0] 00:16:24.147 filename=/dev/nvme0n1 00:16:24.147 [job1] 00:16:24.147 filename=/dev/nvme0n2 00:16:24.147 [job2] 00:16:24.147 filename=/dev/nvme0n3 00:16:24.147 [job3] 00:16:24.147 filename=/dev/nvme0n4 00:16:24.147 Could not set queue depth (nvme0n1) 00:16:24.147 Could not set queue depth (nvme0n2) 00:16:24.147 Could not set queue depth (nvme0n3) 00:16:24.147 Could not set queue depth (nvme0n4) 00:16:24.417 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.417 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.417 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.417 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.417 fio-3.35 00:16:24.417 Starting 4 threads 00:16:26.959 17:01:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:27.220 17:01:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:27.220 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=421888, buflen=4096 00:16:27.220 fio: pid=1444556, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.220 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5808128, buflen=4096 00:16:27.220 fio: pid=1444555, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.220 17:01:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.220 17:01:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:27.480 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12619776, buflen=4096 00:16:27.480 fio: pid=1444553, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.480 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.480 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:27.480 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11382784, buflen=4096 00:16:27.480 fio: pid=1444554, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.741 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.741 
17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:27.741 00:16:27.741 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1444553: Wed May 15 17:01:06 2024 00:16:27.741 read: IOPS=1052, BW=4210KiB/s (4312kB/s)(12.0MiB/2927msec) 00:16:27.741 slat (usec): min=6, max=27945, avg=37.04, stdev=554.18 00:16:27.741 clat (usec): min=356, max=6720, avg=900.07, stdev=170.36 00:16:27.741 lat (usec): min=381, max=28930, avg=937.11, stdev=582.58 00:16:27.741 clat percentiles (usec): 00:16:27.741 | 1.00th=[ 537], 5.00th=[ 660], 10.00th=[ 734], 20.00th=[ 791], 00:16:27.741 | 30.00th=[ 840], 40.00th=[ 873], 50.00th=[ 906], 60.00th=[ 938], 00:16:27.741 | 70.00th=[ 979], 80.00th=[ 1012], 90.00th=[ 1057], 95.00th=[ 1106], 00:16:27.741 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1303], 99.95th=[ 1336], 00:16:27.741 | 99.99th=[ 6718] 00:16:27.741 bw ( KiB/s): min= 4224, max= 4432, per=45.25%, avg=4321.60, stdev=75.98, samples=5 00:16:27.741 iops : min= 1056, max= 1108, avg=1080.40, stdev=18.99, samples=5 00:16:27.741 lat (usec) : 500=0.29%, 750=11.88%, 1000=64.54% 00:16:27.741 lat (msec) : 2=23.23%, 10=0.03% 00:16:27.741 cpu : usr=1.03%, sys=3.08%, ctx=3084, majf=0, minf=1 00:16:27.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.741 issued rwts: total=3082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.741 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1444554: Wed May 15 17:01:06 2024 00:16:27.742 read: IOPS=899, BW=3595KiB/s (3681kB/s)(10.9MiB/3092msec) 00:16:27.742 slat (usec): min=5, max=30898, avg=66.42, stdev=963.30 00:16:27.742 clat (usec): min=272, max=42021, avg=1032.83, stdev=3182.12 00:16:27.742 lat (usec): min=279, max=42047, avg=1099.26, stdev=3324.16 00:16:27.742 clat percentiles (usec): 00:16:27.742 | 1.00th=[ 469], 5.00th=[ 545], 10.00th=[ 578], 20.00th=[ 635], 00:16:27.742 | 30.00th=[ 668], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 775], 00:16:27.742 | 70.00th=[ 799], 80.00th=[ 914], 90.00th=[ 1090], 95.00th=[ 1123], 00:16:27.742 | 99.00th=[ 1270], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:27.742 | 99.99th=[42206] 00:16:27.742 bw ( KiB/s): min= 96, max= 5600, per=37.85%, avg=3614.83, stdev=2160.69, samples=6 00:16:27.742 iops : min= 24, max= 1400, avg=903.67, stdev=540.18, samples=6 00:16:27.742 lat (usec) : 500=1.94%, 750=50.50%, 1000=30.25% 00:16:27.742 lat (msec) : 2=16.47%, 4=0.04%, 10=0.07%, 20=0.07%, 50=0.61% 00:16:27.742 cpu : usr=0.91%, sys=3.72%, ctx=2788, majf=0, minf=1 00:16:27.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.742 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.742 issued rwts: total=2780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.742 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1444555: Wed May 15 17:01:06 2024 00:16:27.742 read: IOPS=516, BW=2064KiB/s (2114kB/s)(5672KiB/2748msec) 
00:16:27.742 slat (usec): min=6, max=21592, avg=51.78, stdev=707.21 00:16:27.742 clat (usec): min=451, max=42110, avg=1865.00, stdev=6104.86 00:16:27.742 lat (usec): min=459, max=42137, avg=1916.80, stdev=6143.03 00:16:27.742 clat percentiles (usec): 00:16:27.742 | 1.00th=[ 529], 5.00th=[ 603], 10.00th=[ 676], 20.00th=[ 742], 00:16:27.742 | 30.00th=[ 816], 40.00th=[ 889], 50.00th=[ 988], 60.00th=[ 1037], 00:16:27.742 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172], 00:16:27.742 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:27.742 | 99.99th=[42206] 00:16:27.742 bw ( KiB/s): min= 96, max= 4664, per=21.45%, avg=2048.00, stdev=2026.44, samples=5 00:16:27.742 iops : min= 24, max= 1166, avg=512.00, stdev=506.61, samples=5 00:16:27.742 lat (usec) : 500=0.49%, 750=20.23%, 1000=31.64% 00:16:27.742 lat (msec) : 2=45.17%, 10=0.07%, 50=2.33% 00:16:27.742 cpu : usr=0.87%, sys=2.00%, ctx=1422, majf=0, minf=1 00:16:27.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.742 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.742 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.742 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1444556: Wed May 15 17:01:06 2024 00:16:27.742 read: IOPS=39, BW=158KiB/s (162kB/s)(412KiB/2606msec) 00:16:27.742 slat (nsec): min=9995, max=45471, avg=26149.67, stdev=3757.96 00:16:27.742 clat (usec): min=547, max=41967, avg=25064.93, stdev=19810.85 00:16:27.742 lat (usec): min=573, max=41993, avg=25091.08, stdev=19810.42 00:16:27.742 clat percentiles (usec): 00:16:27.742 | 1.00th=[ 586], 5.00th=[ 635], 10.00th=[ 734], 20.00th=[ 824], 00:16:27.742 | 30.00th=[ 881], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:16:27.742 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:27.742 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:27.742 | 99.99th=[42206] 00:16:27.742 bw ( KiB/s): min= 96, max= 408, per=1.68%, avg=160.00, stdev=138.68, samples=5 00:16:27.742 iops : min= 24, max= 102, avg=40.00, stdev=34.67, samples=5 00:16:27.742 lat (usec) : 750=13.46%, 1000=20.19% 00:16:27.742 lat (msec) : 2=5.77%, 50=59.62% 00:16:27.742 cpu : usr=0.00%, sys=0.19%, ctx=104, majf=0, minf=2 00:16:27.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.742 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.742 issued rwts: total=104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.742 00:16:27.742 Run status group 0 (all jobs): 00:16:27.742 READ: bw=9549KiB/s (9778kB/s), 158KiB/s-4210KiB/s (162kB/s-4312kB/s), io=28.8MiB (30.2MB), run=2606-3092msec 00:16:27.742 00:16:27.742 Disk stats (read/write): 00:16:27.742 nvme0n1: ios=3006/0, merge=0/0, ticks=2684/0, in_queue=2684, util=93.39% 00:16:27.742 nvme0n2: ios=2780/0, merge=0/0, ticks=2620/0, in_queue=2620, util=91.94% 00:16:27.742 nvme0n3: ios=1327/0, merge=0/0, ticks=2389/0, in_queue=2389, util=96.03% 00:16:27.742 nvme0n4: ios=103/0, merge=0/0, ticks=2578/0, in_queue=2578, util=96.42% 00:16:27.742 17:01:06 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.742 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:28.009 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.010 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:28.010 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.010 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:28.269 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.269 17:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1444309 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:28.529 nvmf hotplug test: fio failed as expected 00:16:28.529 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:28.791 17:01:07 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.791 rmmod nvme_tcp 00:16:28.791 rmmod nvme_fabrics 00:16:28.791 rmmod nvme_keyring 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1440772 ']' 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1440772 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1440772 ']' 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1440772 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1440772 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1440772' 00:16:28.791 killing process with pid 1440772 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1440772 00:16:28.791 [2024-05-15 17:01:07.562074] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:28.791 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1440772 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.052 17:01:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.961 17:01:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:30.961 00:16:30.961 real 0m28.203s 00:16:30.961 user 2m41.718s 00:16:30.961 sys 0m8.980s 00:16:30.961 17:01:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:30.961 17:01:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.961 ************************************ 00:16:30.961 END TEST nvmf_fio_target 00:16:30.961 
************************************ 00:16:31.220 17:01:09 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:31.220 17:01:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:31.220 17:01:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:31.220 17:01:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:31.220 ************************************ 00:16:31.220 START TEST nvmf_bdevio 00:16:31.220 ************************************ 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:31.220 * Looking for test storage... 00:16:31.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.220 17:01:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
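At this point test/nvmf/common.sh has been sourced and the initiator-side variables are in place: NVMF_PORT=4420, a host NQN freshly generated with nvme gen-hostnqn, and NVME_CONNECT='nvme connect'. A hedged sketch of how those values are typically consumed when attaching the kernel initiator to the target follows; the subsystem NQN and the 10.0.0.2:4420 address are taken from elsewhere in this log, and nvme-cli must be installed.

  # Sketch only: attach and later detach the kernel NVMe/TCP initiator the way
  # the helper functions do; values mirror the variables traced above.
  HOSTNQN=$(nvme gen-hostnqn)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"
  # ... run I/O against the newly attached /dev/nvme* device here ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1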
00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.221 17:01:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:39.373 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:39.373 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:39.373 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:39.373 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.373 17:01:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:39.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:16:39.374 00:16:39.374 --- 10.0.0.2 ping statistics --- 00:16:39.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.374 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:16:39.374 00:16:39.374 --- 10.0.0.1 ping statistics --- 00:16:39.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.374 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1449528 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1449528 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1449528 ']' 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 [2024-05-15 17:01:17.115681] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:16:39.374 [2024-05-15 17:01:17.115744] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.374 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.374 [2024-05-15 17:01:17.204805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.374 [2024-05-15 17:01:17.299905] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.374 [2024-05-15 17:01:17.299960] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.374 [2024-05-15 17:01:17.299968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.374 [2024-05-15 17:01:17.299975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.374 [2024-05-15 17:01:17.299981] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.374 [2024-05-15 17:01:17.300143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:39.374 [2024-05-15 17:01:17.300298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:39.374 [2024-05-15 17:01:17.300459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.374 [2024-05-15 17:01:17.300459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 [2024-05-15 17:01:17.962752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 Malloc0 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 17:01:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:39.374 [2024-05-15 17:01:18.011749] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:39.374 [2024-05-15 17:01:18.012051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:39.374 { 00:16:39.374 "params": { 00:16:39.374 "name": "Nvme$subsystem", 00:16:39.374 "trtype": "$TEST_TRANSPORT", 00:16:39.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:39.374 "adrfam": "ipv4", 00:16:39.374 "trsvcid": "$NVMF_PORT", 00:16:39.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:39.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:39.374 "hdgst": ${hdgst:-false}, 00:16:39.374 "ddgst": ${ddgst:-false} 00:16:39.374 }, 00:16:39.374 "method": "bdev_nvme_attach_controller" 00:16:39.374 } 00:16:39.374 EOF 00:16:39.374 )") 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:39.374 17:01:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:39.374 "params": { 00:16:39.374 "name": "Nvme1", 00:16:39.374 "trtype": "tcp", 00:16:39.374 "traddr": "10.0.0.2", 00:16:39.374 "adrfam": "ipv4", 00:16:39.374 "trsvcid": "4420", 00:16:39.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:39.374 "hdgst": false, 00:16:39.374 "ddgst": false 00:16:39.374 }, 00:16:39.374 "method": "bdev_nvme_attach_controller" 00:16:39.374 }' 00:16:39.374 [2024-05-15 17:01:18.075143] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
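Pulled together, the rpc_cmd traces above provision the bdevio target with a plain five-step RPC sequence, after which the generated JSON just printed hands the matching initiator parameters to the bdevio binary. The consolidated sketch below uses a generic scripts/rpc.py path instead of the full Jenkins workspace path; everything else mirrors the trace.

  # Consolidated target-side setup for the bdevio run (sketch; the repository
  # path is an assumption, arguments are copied from the rpc_cmd calls above).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420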
00:16:39.374 [2024-05-15 17:01:18.075226] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449611 ] 00:16:39.374 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.374 [2024-05-15 17:01:18.143123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.634 [2024-05-15 17:01:18.220709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.634 [2024-05-15 17:01:18.220828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.634 [2024-05-15 17:01:18.220831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.634 I/O targets: 00:16:39.634 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:39.634 00:16:39.634 00:16:39.634 CUnit - A unit testing framework for C - Version 2.1-3 00:16:39.634 http://cunit.sourceforge.net/ 00:16:39.634 00:16:39.634 00:16:39.634 Suite: bdevio tests on: Nvme1n1 00:16:39.634 Test: blockdev write read block ...passed 00:16:39.894 Test: blockdev write zeroes read block ...passed 00:16:39.894 Test: blockdev write zeroes read no split ...passed 00:16:39.894 Test: blockdev write zeroes read split ...passed 00:16:39.894 Test: blockdev write zeroes read split partial ...passed 00:16:39.894 Test: blockdev reset ...[2024-05-15 17:01:18.575953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:39.894 [2024-05-15 17:01:18.576013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd19a60 (9): Bad file descriptor 00:16:39.894 [2024-05-15 17:01:18.634214] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:39.894 passed 00:16:39.894 Test: blockdev write read 8 blocks ...passed 00:16:39.894 Test: blockdev write read size > 128k ...passed 00:16:39.894 Test: blockdev write read invalid size ...passed 00:16:39.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:39.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:39.894 Test: blockdev write read max offset ...passed 00:16:40.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.154 Test: blockdev writev readv 8 blocks ...passed 00:16:40.154 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.154 Test: blockdev writev readv block ...passed 00:16:40.154 Test: blockdev writev readv size > 128k ...passed 00:16:40.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.154 Test: blockdev comparev and writev ...[2024-05-15 17:01:18.859076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.154 [2024-05-15 17:01:18.859101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.154 [2024-05-15 17:01:18.859111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.154 [2024-05-15 17:01:18.859117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:40.154 [2024-05-15 17:01:18.859627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.154 [2024-05-15 17:01:18.859636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:40.154 [2024-05-15 17:01:18.859645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.155 [2024-05-15 17:01:18.859650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.860108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.155 [2024-05-15 17:01:18.860115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.860124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.155 [2024-05-15 17:01:18.860129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.860471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.155 [2024-05-15 17:01:18.860478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.860488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.155 [2024-05-15 17:01:18.860493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:40.155 passed 00:16:40.155 Test: blockdev nvme passthru rw ...passed 00:16:40.155 Test: blockdev nvme passthru vendor specific ...[2024-05-15 17:01:18.946461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.155 [2024-05-15 17:01:18.946474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.946840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.155 [2024-05-15 17:01:18.946847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.947236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.155 [2024-05-15 17:01:18.947244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:40.155 [2024-05-15 17:01:18.947594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.155 [2024-05-15 17:01:18.947601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:40.155 passed 00:16:40.155 Test: blockdev nvme admin passthru ...passed 00:16:40.417 Test: blockdev copy ...passed 00:16:40.417 00:16:40.417 Run Summary: Type Total Ran Passed Failed Inactive 00:16:40.417 suites 1 1 n/a 0 0 00:16:40.417 tests 23 23 23 0 0 00:16:40.417 asserts 152 152 152 0 n/a 00:16:40.417 00:16:40.417 Elapsed time = 1.209 seconds 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.417 rmmod nvme_tcp 00:16:40.417 rmmod nvme_fabrics 00:16:40.417 rmmod nvme_keyring 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1449528 ']' 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1449528 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
1449528 ']' 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1449528 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1449528 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1449528' 00:16:40.417 killing process with pid 1449528 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 1449528 00:16:40.417 [2024-05-15 17:01:19.247841] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:40.417 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1449528 00:16:40.734 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:40.734 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:40.735 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:40.735 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.735 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.735 17:01:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.735 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.735 17:01:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.665 17:01:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:42.665 00:16:42.665 real 0m11.628s 00:16:42.665 user 0m12.403s 00:16:42.665 sys 0m5.877s 00:16:42.665 17:01:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:42.665 17:01:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:42.665 ************************************ 00:16:42.665 END TEST nvmf_bdevio 00:16:42.665 ************************************ 00:16:42.665 17:01:21 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:42.665 17:01:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:42.665 17:01:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:42.665 17:01:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.665 ************************************ 00:16:42.665 START TEST nvmf_auth_target 00:16:42.665 ************************************ 00:16:42.665 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:42.927 * Looking for test storage... 
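
The teardown traced just above (modprobe -v -r nvme-tcp / nvme-fabrics, killprocess of the nvmf_tgt PID, _remove_spdk_ns, ip -4 addr flush cvl_0_1) is what nvmftestfini runs between tests. Below is a minimal sketch of that sequence, assuming the PID and interface names from this particular run; the namespace deletion at the end is an assumption about what _remove_spdk_ns does, since its output is redirected to /dev/null in the trace, and the variable names are illustrative rather than taken from the in-tree helper.

#!/usr/bin/env bash
# Hedged sketch of the teardown steps traced above; PID and interface names
# are copied from this run as placeholders, not a drop-in for nvmftestfini().

NVMF_PID=1449528            # nvmf_tgt PID printed in the log above (placeholder)
TARGET_NS=cvl_0_0_ns_spdk   # network namespace created during test setup

# Unload the initiator-side kernel modules, as in the trace.
modprobe -v -r nvme-tcp     || true
modprobe -v -r nvme-fabrics || true

# Stop the target process and wait for it to actually exit.
kill "$NVMF_PID" 2>/dev/null || true
while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.5; done

# Drop the test address on the initiator interface (seen verbatim in the log).
ip -4 addr flush cvl_0_1

# Assumption: _remove_spdk_ns tears down the namespace; its trace is hidden.
ip netns delete "$TARGET_NS" 2>/dev/null || true
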
00:16:42.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.927 17:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:49.515 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:49.515 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:49.515 Found net devices under 
0000:4b:00.0: cvl_0_0 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:49.515 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.515 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:16:49.775 00:16:49.775 --- 10.0.0.2 ping statistics --- 00:16:49.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.775 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:16:49.775 00:16:49.775 --- 10.0.0.1 ping statistics --- 00:16:49.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.775 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.775 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.035 17:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1453891 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1453891 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1453891 ']' 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
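
The nvmf_tcp_init trace above moves one e810 port (cvl_0_0) into a fresh network namespace as the target side at 10.0.0.2, leaves the other port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420, and ping-checks both directions before nvmf_tgt is launched inside the namespace. The following is a condensed sketch of that wiring using the same interface names from this run; the shell variables are illustrative, and the in-tree helper in nvmf/common.sh additionally branches on RDMA and a second target IP, which the sketch omits.

#!/usr/bin/env bash
# Condensed sketch of the TCP test wiring traced above. Interface names
# cvl_0_0 / cvl_0_1 come from this run; elsewhere they will differ.
set -e

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0   # becomes 10.0.0.2 inside the namespace (target side)
INIT_IF=cvl_0_1     # stays in the root namespace as 10.0.0.1 (initiator side)

# Start from clean interfaces, then split them across the namespace.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

# Address both ends of the 10.0.0.0/24 test network.
ip addr add 10.0.0.1/24 dev "$INIT_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

# Bring the links (and loopback in the namespace) up.
ip link set "$INIT_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Allow NVMe/TCP traffic on port 4420 through the initiator-side firewall.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, mirroring the two pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

After this wiring the trace starts the target as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth", so the subsystem listens on 10.0.0.2 inside the namespace while the host-side tooling stays in the root namespace and reaches it over the physical link.
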
00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:50.036 17:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=1454204 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b7f8b2569db55a23ff3016ad6b057a495e7dd8583f1813d5 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ut6 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b7f8b2569db55a23ff3016ad6b057a495e7dd8583f1813d5 0 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b7f8b2569db55a23ff3016ad6b057a495e7dd8583f1813d5 0 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b7f8b2569db55a23ff3016ad6b057a495e7dd8583f1813d5 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ut6 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ut6 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.ut6 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=99873a4ae062d69ef9ee5eef53783582 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pPD 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 99873a4ae062d69ef9ee5eef53783582 1 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 99873a4ae062d69ef9ee5eef53783582 1 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=99873a4ae062d69ef9ee5eef53783582 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pPD 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pPD 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.pPD 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2a31bb871cbc28942a299acf8d089138459303ad589ab682 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KB2 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2a31bb871cbc28942a299acf8d089138459303ad589ab682 2 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2a31bb871cbc28942a299acf8d089138459303ad589ab682 2 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:50.975 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2a31bb871cbc28942a299acf8d089138459303ad589ab682 00:16:50.976 
17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KB2 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KB2 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.KB2 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3a92fbddc7be1d8d579d96cdab013aff25aea16bf65db7bcb674268d023c9e7b 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pCE 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3a92fbddc7be1d8d579d96cdab013aff25aea16bf65db7bcb674268d023c9e7b 3 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3a92fbddc7be1d8d579d96cdab013aff25aea16bf65db7bcb674268d023c9e7b 3 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3a92fbddc7be1d8d579d96cdab013aff25aea16bf65db7bcb674268d023c9e7b 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pCE 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pCE 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.pCE 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 1453891 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1453891 ']' 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
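
The four gen_dhchap_key calls traced above (null/48, sha256/32, sha384/48, sha512/64) each draw a random hex string of the requested length, wrap it as a DHHC-1 secret, and store it in a mode-0600 temp file that the test later registers with keyring_file_add_key. Below is a rough sketch of that flow using the same digest-to-number mapping seen in the trace; the function name is illustrative, and the actual DHHC-1 encoding is produced by the short Python helper piped into python in the trace, so it is only stubbed out here.

#!/usr/bin/env bash
# Rough sketch of the key-generation flow traced above. Key lengths and the
# digest numbering (null=0, sha256=1, sha384=2, sha512=3) match the log; the
# real DHHC-1 wrapping is done by a Python helper in nvmf/common.sh and is
# deliberately not reproduced here.

gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file

    # 'len' is the hex-string length, so read len/2 random bytes.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    # Same temp-file naming pattern as the trace (spdk.key-<digest>.XXX).
    file=$(mktemp -t "spdk.key-$digest.XXX")

    # Placeholder: the real helper writes the key as "DHHC-1:<digest-id>:...";
    # storing the raw hex here is only to show the overall flow.
    echo "$key" > "$file"

    chmod 0600 "$file"
    echo "$file"
}

# Usage mirroring the four keys generated in the log above.
keys[0]=$(gen_dhchap_key_sketch null   48)
keys[1]=$(gen_dhchap_key_sketch sha256 32)
keys[2]=$(gen_dhchap_key_sketch sha384 48)
keys[3]=$(gen_dhchap_key_sketch sha512 64)
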
00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:50.976 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 1454204 /var/tmp/host.sock 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1454204 ']' 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:51.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:51.237 17:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.237 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:51.237 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ut6 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ut6 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ut6 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pPD 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pPD 00:16:51.498 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.pPD 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KB2 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KB2 00:16:51.758 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KB2 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pCE 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.pCE 00:16:52.018 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.pCE 00:16:52.019 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:52.019 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.019 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:52.019 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.019 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.279 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.280 17:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.280 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.280 17:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.280 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:52.541 { 00:16:52.541 "cntlid": 1, 00:16:52.541 "qid": 0, 00:16:52.541 "state": "enabled", 00:16:52.541 "listen_address": { 00:16:52.541 "trtype": "TCP", 00:16:52.541 "adrfam": "IPv4", 00:16:52.541 "traddr": "10.0.0.2", 00:16:52.541 "trsvcid": "4420" 00:16:52.541 }, 00:16:52.541 "peer_address": { 00:16:52.541 "trtype": "TCP", 00:16:52.541 "adrfam": "IPv4", 00:16:52.541 "traddr": "10.0.0.1", 00:16:52.541 "trsvcid": "60380" 00:16:52.541 }, 00:16:52.541 "auth": { 00:16:52.541 "state": "completed", 00:16:52.541 "digest": "sha256", 00:16:52.541 "dhgroup": "null" 00:16:52.541 } 00:16:52.541 } 00:16:52.541 ]' 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.541 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.801 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:52.801 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.801 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.801 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.801 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.801 17:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:53.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:53.743 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:54.003 00:16:54.003 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:54.003 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:54.003 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.266 17:01:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:54.266 { 00:16:54.266 "cntlid": 3, 00:16:54.266 "qid": 0, 00:16:54.266 "state": "enabled", 00:16:54.266 "listen_address": { 00:16:54.266 "trtype": "TCP", 00:16:54.266 "adrfam": "IPv4", 00:16:54.266 "traddr": "10.0.0.2", 00:16:54.266 "trsvcid": "4420" 00:16:54.266 }, 00:16:54.266 "peer_address": { 00:16:54.266 "trtype": "TCP", 00:16:54.266 "adrfam": "IPv4", 00:16:54.266 "traddr": "10.0.0.1", 00:16:54.266 "trsvcid": "60412" 00:16:54.266 }, 00:16:54.266 "auth": { 00:16:54.266 "state": "completed", 00:16:54.266 "digest": "sha256", 00:16:54.266 "dhgroup": "null" 00:16:54.266 } 00:16:54.266 } 00:16:54.266 ]' 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.266 17:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:54.266 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:54.266 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:54.266 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.266 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.266 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.527 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:55.467 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:55.468 17:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.468 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.728 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:55.728 { 00:16:55.728 "cntlid": 5, 00:16:55.728 "qid": 0, 00:16:55.728 "state": "enabled", 00:16:55.728 "listen_address": { 00:16:55.728 "trtype": "TCP", 00:16:55.728 "adrfam": "IPv4", 00:16:55.728 "traddr": "10.0.0.2", 00:16:55.728 "trsvcid": "4420" 00:16:55.728 }, 00:16:55.728 "peer_address": { 00:16:55.728 "trtype": "TCP", 00:16:55.728 "adrfam": "IPv4", 00:16:55.728 "traddr": "10.0.0.1", 00:16:55.728 "trsvcid": "60436" 00:16:55.728 }, 00:16:55.728 "auth": { 00:16:55.728 "state": "completed", 00:16:55.728 "digest": "sha256", 00:16:55.728 "dhgroup": "null" 00:16:55.728 } 00:16:55.728 } 00:16:55.728 ]' 00:16:55.728 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.988 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.248 17:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.819 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.080 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.341 00:16:57.341 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:57.341 17:01:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:57.341 17:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:57.341 { 00:16:57.341 "cntlid": 7, 00:16:57.341 "qid": 0, 00:16:57.341 "state": "enabled", 00:16:57.341 "listen_address": { 00:16:57.341 "trtype": "TCP", 00:16:57.341 "adrfam": "IPv4", 00:16:57.341 "traddr": "10.0.0.2", 00:16:57.341 "trsvcid": "4420" 00:16:57.341 }, 00:16:57.341 "peer_address": { 00:16:57.341 "trtype": "TCP", 00:16:57.341 "adrfam": "IPv4", 00:16:57.341 "traddr": "10.0.0.1", 00:16:57.341 "trsvcid": "60466" 00:16:57.341 }, 00:16:57.341 "auth": { 00:16:57.341 "state": "completed", 00:16:57.341 "digest": "sha256", 00:16:57.341 "dhgroup": "null" 00:16:57.341 } 00:16:57.341 } 00:16:57.341 ]' 00:16:57.341 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.601 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.862 17:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.433 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.693 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:16:58.693 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:58.693 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.693 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:58.693 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.693 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:16:58.694 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.694 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.694 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.694 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:58.694 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:58.954 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:58.954 { 00:16:58.954 "cntlid": 9, 00:16:58.954 "qid": 0, 00:16:58.954 "state": "enabled", 00:16:58.954 "listen_address": { 00:16:58.954 "trtype": "TCP", 00:16:58.954 "adrfam": "IPv4", 00:16:58.954 "traddr": "10.0.0.2", 00:16:58.954 "trsvcid": "4420" 00:16:58.954 }, 00:16:58.954 "peer_address": { 00:16:58.954 "trtype": "TCP", 00:16:58.954 "adrfam": "IPv4", 00:16:58.954 "traddr": "10.0.0.1", 
00:16:58.954 "trsvcid": "60502" 00:16:58.954 }, 00:16:58.954 "auth": { 00:16:58.954 "state": "completed", 00:16:58.954 "digest": "sha256", 00:16:58.954 "dhgroup": "ffdhe2048" 00:16:58.954 } 00:16:58.954 } 00:16:58.954 ]' 00:16:58.954 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.216 17:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.476 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.049 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.310 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:17:00.310 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:00.310 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.310 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.310 17:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.310 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:00.310 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.310 17:01:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.310 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.310 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:00.310 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:00.571 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.571 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:00.833 { 00:17:00.833 "cntlid": 11, 00:17:00.833 "qid": 0, 00:17:00.833 "state": "enabled", 00:17:00.833 "listen_address": { 00:17:00.833 "trtype": "TCP", 00:17:00.833 "adrfam": "IPv4", 00:17:00.833 "traddr": "10.0.0.2", 00:17:00.833 "trsvcid": "4420" 00:17:00.833 }, 00:17:00.833 "peer_address": { 00:17:00.833 "trtype": "TCP", 00:17:00.833 "adrfam": "IPv4", 00:17:00.833 "traddr": "10.0.0.1", 00:17:00.833 "trsvcid": "60526" 00:17:00.833 }, 00:17:00.833 "auth": { 00:17:00.833 "state": "completed", 00:17:00.833 "digest": "sha256", 00:17:00.833 "dhgroup": "ffdhe2048" 00:17:00.833 } 00:17:00.833 } 00:17:00.833 ]' 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.833 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.094 17:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.670 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:01.931 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.193 00:17:02.193 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:02.193 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:02.193 17:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:02.193 { 00:17:02.193 "cntlid": 13, 00:17:02.193 "qid": 0, 00:17:02.193 "state": "enabled", 00:17:02.193 "listen_address": { 00:17:02.193 "trtype": "TCP", 00:17:02.193 "adrfam": "IPv4", 00:17:02.193 "traddr": "10.0.0.2", 00:17:02.193 "trsvcid": "4420" 00:17:02.193 }, 00:17:02.193 "peer_address": { 00:17:02.193 "trtype": "TCP", 00:17:02.193 "adrfam": "IPv4", 00:17:02.193 "traddr": "10.0.0.1", 00:17:02.193 "trsvcid": "38642" 00:17:02.193 }, 00:17:02.193 "auth": { 00:17:02.193 "state": "completed", 00:17:02.193 "digest": "sha256", 00:17:02.193 "dhgroup": "ffdhe2048" 00:17:02.193 } 00:17:02.193 } 00:17:02.193 ]' 00:17:02.193 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.454 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.715 17:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.288 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.549 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.812 00:17:03.812 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:03.812 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:03.812 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:04.074 { 00:17:04.074 "cntlid": 15, 00:17:04.074 "qid": 0, 00:17:04.074 "state": "enabled", 00:17:04.074 "listen_address": { 00:17:04.074 "trtype": "TCP", 00:17:04.074 "adrfam": "IPv4", 00:17:04.074 "traddr": "10.0.0.2", 00:17:04.074 "trsvcid": "4420" 00:17:04.074 }, 00:17:04.074 "peer_address": { 00:17:04.074 "trtype": "TCP", 00:17:04.074 "adrfam": "IPv4", 00:17:04.074 "traddr": "10.0.0.1", 00:17:04.074 "trsvcid": "38668" 00:17:04.074 }, 00:17:04.074 "auth": { 00:17:04.074 "state": "completed", 00:17:04.074 "digest": "sha256", 00:17:04.074 "dhgroup": "ffdhe2048" 00:17:04.074 } 00:17:04.074 } 00:17:04.074 ]' 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.074 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.335 17:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.907 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.167 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.168 17:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:05.168 17:01:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:05.428 00:17:05.428 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:05.428 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.428 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:05.689 { 00:17:05.689 "cntlid": 17, 00:17:05.689 "qid": 0, 00:17:05.689 "state": "enabled", 00:17:05.689 "listen_address": { 00:17:05.689 "trtype": "TCP", 00:17:05.689 "adrfam": "IPv4", 00:17:05.689 "traddr": "10.0.0.2", 00:17:05.689 "trsvcid": "4420" 00:17:05.689 }, 00:17:05.689 "peer_address": { 00:17:05.689 "trtype": "TCP", 00:17:05.689 "adrfam": "IPv4", 00:17:05.689 "traddr": "10.0.0.1", 00:17:05.689 "trsvcid": "38692" 00:17:05.689 }, 00:17:05.689 "auth": { 00:17:05.689 "state": "completed", 00:17:05.689 "digest": "sha256", 00:17:05.689 "dhgroup": "ffdhe3072" 00:17:05.689 } 00:17:05.689 } 00:17:05.689 ]' 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.689 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.950 17:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:06.896 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:07.217 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:07.217 { 
00:17:07.217 "cntlid": 19, 00:17:07.217 "qid": 0, 00:17:07.217 "state": "enabled", 00:17:07.217 "listen_address": { 00:17:07.217 "trtype": "TCP", 00:17:07.217 "adrfam": "IPv4", 00:17:07.217 "traddr": "10.0.0.2", 00:17:07.217 "trsvcid": "4420" 00:17:07.217 }, 00:17:07.217 "peer_address": { 00:17:07.217 "trtype": "TCP", 00:17:07.217 "adrfam": "IPv4", 00:17:07.217 "traddr": "10.0.0.1", 00:17:07.217 "trsvcid": "38728" 00:17:07.217 }, 00:17:07.217 "auth": { 00:17:07.217 "state": "completed", 00:17:07.217 "digest": "sha256", 00:17:07.217 "dhgroup": "ffdhe3072" 00:17:07.217 } 00:17:07.217 } 00:17:07.217 ]' 00:17:07.217 17:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:07.217 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.217 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:07.491 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.491 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:07.491 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.491 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.491 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.491 17:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.432 
17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:08.432 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:08.692 00:17:08.692 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:08.692 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:08.692 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:08.951 { 00:17:08.951 "cntlid": 21, 00:17:08.951 "qid": 0, 00:17:08.951 "state": "enabled", 00:17:08.951 "listen_address": { 00:17:08.951 "trtype": "TCP", 00:17:08.951 "adrfam": "IPv4", 00:17:08.951 "traddr": "10.0.0.2", 00:17:08.951 "trsvcid": "4420" 00:17:08.951 }, 00:17:08.951 "peer_address": { 00:17:08.951 "trtype": "TCP", 00:17:08.951 "adrfam": "IPv4", 00:17:08.951 "traddr": "10.0.0.1", 00:17:08.951 "trsvcid": "38752" 00:17:08.951 }, 00:17:08.951 "auth": { 00:17:08.951 "state": "completed", 00:17:08.951 "digest": "sha256", 00:17:08.951 "dhgroup": "ffdhe3072" 00:17:08.951 } 00:17:08.951 } 00:17:08.951 ]' 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.951 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:09.210 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.210 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.210 17:01:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.210 17:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.152 17:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.413 00:17:10.413 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:10.413 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:10.413 17:01:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:10.675 { 00:17:10.675 "cntlid": 23, 00:17:10.675 "qid": 0, 00:17:10.675 "state": "enabled", 00:17:10.675 "listen_address": { 00:17:10.675 "trtype": "TCP", 00:17:10.675 "adrfam": "IPv4", 00:17:10.675 "traddr": "10.0.0.2", 00:17:10.675 "trsvcid": "4420" 00:17:10.675 }, 00:17:10.675 "peer_address": { 00:17:10.675 "trtype": "TCP", 00:17:10.675 "adrfam": "IPv4", 00:17:10.675 "traddr": "10.0.0.1", 00:17:10.675 "trsvcid": "38786" 00:17:10.675 }, 00:17:10.675 "auth": { 00:17:10.675 "state": "completed", 00:17:10.675 "digest": "sha256", 00:17:10.675 "dhgroup": "ffdhe3072" 00:17:10.675 } 00:17:10.675 } 00:17:10.675 ]' 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.675 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.936 17:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.542 17:01:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.542 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.802 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:12.063 00:17:12.063 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:12.063 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.063 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:12.325 { 00:17:12.325 "cntlid": 25, 00:17:12.325 "qid": 0, 00:17:12.325 "state": "enabled", 00:17:12.325 "listen_address": { 00:17:12.325 "trtype": "TCP", 00:17:12.325 "adrfam": "IPv4", 00:17:12.325 "traddr": "10.0.0.2", 00:17:12.325 "trsvcid": "4420" 00:17:12.325 }, 00:17:12.325 "peer_address": { 00:17:12.325 "trtype": "TCP", 00:17:12.325 "adrfam": "IPv4", 00:17:12.325 "traddr": "10.0.0.1", 00:17:12.325 "trsvcid": "49878" 00:17:12.325 }, 
00:17:12.325 "auth": { 00:17:12.325 "state": "completed", 00:17:12.325 "digest": "sha256", 00:17:12.325 "dhgroup": "ffdhe4096" 00:17:12.325 } 00:17:12.325 } 00:17:12.325 ]' 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:12.325 17:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.325 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:12.325 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.325 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:12.325 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.325 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.325 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.587 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:13.158 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.419 17:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.419 17:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.419 17:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:13.419 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:13.680 00:17:13.680 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:13.680 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:13.680 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:13.941 { 00:17:13.941 "cntlid": 27, 00:17:13.941 "qid": 0, 00:17:13.941 "state": "enabled", 00:17:13.941 "listen_address": { 00:17:13.941 "trtype": "TCP", 00:17:13.941 "adrfam": "IPv4", 00:17:13.941 "traddr": "10.0.0.2", 00:17:13.941 "trsvcid": "4420" 00:17:13.941 }, 00:17:13.941 "peer_address": { 00:17:13.941 "trtype": "TCP", 00:17:13.941 "adrfam": "IPv4", 00:17:13.941 "traddr": "10.0.0.1", 00:17:13.941 "trsvcid": "49888" 00:17:13.941 }, 00:17:13.941 "auth": { 00:17:13.941 "state": "completed", 00:17:13.941 "digest": "sha256", 00:17:13.941 "dhgroup": "ffdhe4096" 00:17:13.941 } 00:17:13.941 } 00:17:13.941 ]' 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.941 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.202 17:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.145 17:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.405 00:17:15.405 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.405 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.405 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.666 17:01:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:15.666 { 00:17:15.666 "cntlid": 29, 00:17:15.666 "qid": 0, 00:17:15.666 "state": "enabled", 00:17:15.666 "listen_address": { 00:17:15.666 "trtype": "TCP", 00:17:15.666 "adrfam": "IPv4", 00:17:15.666 "traddr": "10.0.0.2", 00:17:15.666 "trsvcid": "4420" 00:17:15.666 }, 00:17:15.666 "peer_address": { 00:17:15.666 "trtype": "TCP", 00:17:15.666 "adrfam": "IPv4", 00:17:15.666 "traddr": "10.0.0.1", 00:17:15.666 "trsvcid": "49922" 00:17:15.666 }, 00:17:15.666 "auth": { 00:17:15.666 "state": "completed", 00:17:15.666 "digest": "sha256", 00:17:15.666 "dhgroup": "ffdhe4096" 00:17:15.666 } 00:17:15.666 } 00:17:15.666 ]' 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.666 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.926 17:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.498 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.760 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.020 00:17:17.020 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.020 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.020 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:17.281 { 00:17:17.281 "cntlid": 31, 00:17:17.281 "qid": 0, 00:17:17.281 "state": "enabled", 00:17:17.281 "listen_address": { 00:17:17.281 "trtype": "TCP", 00:17:17.281 "adrfam": "IPv4", 00:17:17.281 "traddr": "10.0.0.2", 00:17:17.281 "trsvcid": "4420" 00:17:17.281 }, 00:17:17.281 "peer_address": { 00:17:17.281 "trtype": "TCP", 00:17:17.281 "adrfam": "IPv4", 00:17:17.281 "traddr": "10.0.0.1", 00:17:17.281 "trsvcid": "49944" 00:17:17.281 }, 00:17:17.281 "auth": { 00:17:17.281 "state": "completed", 00:17:17.281 "digest": "sha256", 00:17:17.281 "dhgroup": "ffdhe4096" 00:17:17.281 } 00:17:17.281 } 00:17:17.281 ]' 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:17.281 17:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:17.281 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.281 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.281 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.542 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.114 17:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:18.375 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:18.636 00:17:18.636 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:18.636 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:18.636 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:18.898 { 00:17:18.898 "cntlid": 33, 00:17:18.898 "qid": 0, 00:17:18.898 "state": "enabled", 00:17:18.898 "listen_address": { 00:17:18.898 "trtype": "TCP", 00:17:18.898 "adrfam": "IPv4", 00:17:18.898 "traddr": "10.0.0.2", 00:17:18.898 "trsvcid": "4420" 00:17:18.898 }, 00:17:18.898 "peer_address": { 00:17:18.898 "trtype": "TCP", 00:17:18.898 "adrfam": "IPv4", 00:17:18.898 "traddr": "10.0.0.1", 00:17:18.898 "trsvcid": "49972" 00:17:18.898 }, 00:17:18.898 "auth": { 00:17:18.898 "state": "completed", 00:17:18.898 "digest": "sha256", 00:17:18.898 "dhgroup": "ffdhe6144" 00:17:18.898 } 00:17:18.898 } 00:17:18.898 ]' 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.898 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:19.158 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.158 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.158 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.158 17:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:20.102 17:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:20.363 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:20.624 { 00:17:20.624 "cntlid": 35, 00:17:20.624 "qid": 0, 
00:17:20.624 "state": "enabled", 00:17:20.624 "listen_address": { 00:17:20.624 "trtype": "TCP", 00:17:20.624 "adrfam": "IPv4", 00:17:20.624 "traddr": "10.0.0.2", 00:17:20.624 "trsvcid": "4420" 00:17:20.624 }, 00:17:20.624 "peer_address": { 00:17:20.624 "trtype": "TCP", 00:17:20.624 "adrfam": "IPv4", 00:17:20.624 "traddr": "10.0.0.1", 00:17:20.624 "trsvcid": "49998" 00:17:20.624 }, 00:17:20.624 "auth": { 00:17:20.624 "state": "completed", 00:17:20.624 "digest": "sha256", 00:17:20.624 "dhgroup": "ffdhe6144" 00:17:20.624 } 00:17:20.624 } 00:17:20.624 ]' 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.624 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:20.885 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.885 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:20.885 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.886 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.886 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.886 17:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:21.829 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:22.402 00:17:22.402 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.402 17:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:22.402 { 00:17:22.402 "cntlid": 37, 00:17:22.402 "qid": 0, 00:17:22.402 "state": "enabled", 00:17:22.402 "listen_address": { 00:17:22.402 "trtype": "TCP", 00:17:22.402 "adrfam": "IPv4", 00:17:22.402 "traddr": "10.0.0.2", 00:17:22.402 "trsvcid": "4420" 00:17:22.402 }, 00:17:22.402 "peer_address": { 00:17:22.402 "trtype": "TCP", 00:17:22.402 "adrfam": "IPv4", 00:17:22.402 "traddr": "10.0.0.1", 00:17:22.402 "trsvcid": "55108" 00:17:22.402 }, 00:17:22.402 "auth": { 00:17:22.402 "state": "completed", 00:17:22.402 "digest": "sha256", 00:17:22.402 "dhgroup": "ffdhe6144" 00:17:22.402 } 00:17:22.402 } 00:17:22.402 ]' 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.402 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.663 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.663 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.663 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.663 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.663 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.663 17:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.608 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.180 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:24.180 { 00:17:24.180 "cntlid": 39, 00:17:24.180 "qid": 0, 00:17:24.180 "state": "enabled", 00:17:24.180 "listen_address": { 00:17:24.180 "trtype": "TCP", 00:17:24.180 "adrfam": "IPv4", 00:17:24.180 "traddr": "10.0.0.2", 00:17:24.180 "trsvcid": "4420" 00:17:24.180 }, 00:17:24.180 "peer_address": { 00:17:24.180 "trtype": "TCP", 00:17:24.180 "adrfam": "IPv4", 00:17:24.180 "traddr": "10.0.0.1", 00:17:24.180 "trsvcid": "55132" 00:17:24.180 }, 00:17:24.180 "auth": { 00:17:24.180 "state": "completed", 00:17:24.180 "digest": "sha256", 00:17:24.180 "dhgroup": "ffdhe6144" 00:17:24.180 } 00:17:24.180 } 00:17:24.180 ]' 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.180 17:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:24.440 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.440 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:24.440 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.440 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.440 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.440 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.382 17:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:25.382 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.383 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.383 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.383 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:25.383 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:25.953 00:17:25.953 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:25.953 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:25.953 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:26.214 { 00:17:26.214 "cntlid": 41, 00:17:26.214 "qid": 0, 00:17:26.214 "state": "enabled", 00:17:26.214 "listen_address": { 00:17:26.214 "trtype": "TCP", 00:17:26.214 "adrfam": "IPv4", 00:17:26.214 "traddr": "10.0.0.2", 00:17:26.214 "trsvcid": "4420" 00:17:26.214 }, 00:17:26.214 "peer_address": { 00:17:26.214 "trtype": "TCP", 00:17:26.214 "adrfam": "IPv4", 00:17:26.214 "traddr": "10.0.0.1", 00:17:26.214 "trsvcid": "55152" 00:17:26.214 }, 00:17:26.214 "auth": { 00:17:26.214 "state": 
"completed", 00:17:26.214 "digest": "sha256", 00:17:26.214 "dhgroup": "ffdhe8192" 00:17:26.214 } 00:17:26.214 } 00:17:26.214 ]' 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.214 17:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:26.214 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.214 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.214 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.475 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.412 17:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:27.412 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:27.980 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:27.980 { 00:17:27.980 "cntlid": 43, 00:17:27.980 "qid": 0, 00:17:27.980 "state": "enabled", 00:17:27.980 "listen_address": { 00:17:27.980 "trtype": "TCP", 00:17:27.980 "adrfam": "IPv4", 00:17:27.980 "traddr": "10.0.0.2", 00:17:27.980 "trsvcid": "4420" 00:17:27.980 }, 00:17:27.980 "peer_address": { 00:17:27.980 "trtype": "TCP", 00:17:27.980 "adrfam": "IPv4", 00:17:27.980 "traddr": "10.0.0.1", 00:17:27.980 "trsvcid": "55194" 00:17:27.980 }, 00:17:27.980 "auth": { 00:17:27.980 "state": "completed", 00:17:27.980 "digest": "sha256", 00:17:27.980 "dhgroup": "ffdhe8192" 00:17:27.980 } 00:17:27.980 } 00:17:27.980 ]' 00:17:27.980 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:28.240 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.240 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:28.240 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.240 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:28.240 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.240 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.241 17:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.500 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.071 17:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.331 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.976 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:29.976 { 00:17:29.976 "cntlid": 45, 00:17:29.976 "qid": 0, 00:17:29.976 "state": "enabled", 00:17:29.976 "listen_address": { 00:17:29.976 "trtype": "TCP", 00:17:29.976 "adrfam": "IPv4", 00:17:29.976 "traddr": "10.0.0.2", 00:17:29.976 "trsvcid": "4420" 00:17:29.976 }, 00:17:29.976 "peer_address": { 00:17:29.976 "trtype": "TCP", 00:17:29.976 "adrfam": "IPv4", 00:17:29.976 "traddr": "10.0.0.1", 00:17:29.976 "trsvcid": "55212" 00:17:29.976 }, 00:17:29.976 "auth": { 00:17:29.976 "state": "completed", 00:17:29.976 "digest": "sha256", 00:17:29.976 "dhgroup": "ffdhe8192" 00:17:29.976 } 00:17:29.976 } 00:17:29.976 ]' 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:29.976 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.977 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:30.255 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.256 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:30.256 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.256 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.256 17:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.256 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:17:31.196 
17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.196 17:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.766 00:17:31.766 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:31.766 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.766 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:32.025 { 00:17:32.025 "cntlid": 47, 00:17:32.025 "qid": 0, 00:17:32.025 "state": "enabled", 00:17:32.025 "listen_address": { 00:17:32.025 "trtype": "TCP", 00:17:32.025 "adrfam": "IPv4", 00:17:32.025 "traddr": "10.0.0.2", 00:17:32.025 "trsvcid": "4420" 00:17:32.025 }, 00:17:32.025 "peer_address": { 00:17:32.025 "trtype": "TCP", 00:17:32.025 "adrfam": "IPv4", 00:17:32.025 "traddr": "10.0.0.1", 00:17:32.025 "trsvcid": "55248" 00:17:32.025 }, 00:17:32.025 "auth": { 00:17:32.025 "state": "completed", 00:17:32.025 "digest": "sha256", 00:17:32.025 "dhgroup": "ffdhe8192" 00:17:32.025 } 00:17:32.025 } 00:17:32.025 ]' 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.025 
17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.025 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.285 17:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.228 17:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:33.228 17:02:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:33.489 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:33.489 { 00:17:33.489 "cntlid": 49, 00:17:33.489 "qid": 0, 00:17:33.489 "state": "enabled", 00:17:33.489 "listen_address": { 00:17:33.489 "trtype": "TCP", 00:17:33.489 "adrfam": "IPv4", 00:17:33.489 "traddr": "10.0.0.2", 00:17:33.489 "trsvcid": "4420" 00:17:33.489 }, 00:17:33.489 "peer_address": { 00:17:33.489 "trtype": "TCP", 00:17:33.489 "adrfam": "IPv4", 00:17:33.489 "traddr": "10.0.0.1", 00:17:33.489 "trsvcid": "55880" 00:17:33.489 }, 00:17:33.489 "auth": { 00:17:33.489 "state": "completed", 00:17:33.489 "digest": "sha384", 00:17:33.489 "dhgroup": "null" 00:17:33.489 } 00:17:33.489 } 00:17:33.489 ]' 00:17:33.489 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.749 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.009 17:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.584 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:34.844 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:35.104 00:17:35.104 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:35.104 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:35.104 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.105 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.105 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.105 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.105 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.365 17:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.365 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:35.365 { 00:17:35.365 "cntlid": 51, 00:17:35.365 "qid": 
0, 00:17:35.365 "state": "enabled", 00:17:35.365 "listen_address": { 00:17:35.365 "trtype": "TCP", 00:17:35.365 "adrfam": "IPv4", 00:17:35.365 "traddr": "10.0.0.2", 00:17:35.365 "trsvcid": "4420" 00:17:35.365 }, 00:17:35.365 "peer_address": { 00:17:35.365 "trtype": "TCP", 00:17:35.365 "adrfam": "IPv4", 00:17:35.365 "traddr": "10.0.0.1", 00:17:35.365 "trsvcid": "55916" 00:17:35.365 }, 00:17:35.365 "auth": { 00:17:35.365 "state": "completed", 00:17:35.365 "digest": "sha384", 00:17:35.365 "dhgroup": "null" 00:17:35.365 } 00:17:35.365 } 00:17:35.365 ]' 00:17:35.365 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:35.365 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.365 17:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:35.365 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:35.365 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:35.365 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.365 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.365 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.624 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:36.194 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.194 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.194 17:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.194 17:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.195 17:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.195 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:36.195 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:36.195 17:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:36.455 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:36.716 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.716 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:36.976 { 00:17:36.976 "cntlid": 53, 00:17:36.976 "qid": 0, 00:17:36.976 "state": "enabled", 00:17:36.976 "listen_address": { 00:17:36.976 "trtype": "TCP", 00:17:36.976 "adrfam": "IPv4", 00:17:36.976 "traddr": "10.0.0.2", 00:17:36.976 "trsvcid": "4420" 00:17:36.976 }, 00:17:36.976 "peer_address": { 00:17:36.976 "trtype": "TCP", 00:17:36.976 "adrfam": "IPv4", 00:17:36.976 "traddr": "10.0.0.1", 00:17:36.976 "trsvcid": "55936" 00:17:36.976 }, 00:17:36.976 "auth": { 00:17:36.976 "state": "completed", 00:17:36.976 "digest": "sha384", 00:17:36.976 "dhgroup": "null" 00:17:36.976 } 00:17:36.976 } 00:17:36.976 ]' 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.976 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.236 17:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.806 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:38.066 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.067 17:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.326 00:17:38.326 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:38.326 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:38.326 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:38.587 { 00:17:38.587 "cntlid": 55, 00:17:38.587 "qid": 0, 00:17:38.587 "state": "enabled", 00:17:38.587 "listen_address": { 00:17:38.587 "trtype": "TCP", 00:17:38.587 "adrfam": "IPv4", 00:17:38.587 "traddr": "10.0.0.2", 00:17:38.587 "trsvcid": "4420" 00:17:38.587 }, 00:17:38.587 "peer_address": { 00:17:38.587 "trtype": "TCP", 00:17:38.587 "adrfam": "IPv4", 00:17:38.587 "traddr": "10.0.0.1", 00:17:38.587 "trsvcid": "55962" 00:17:38.587 }, 00:17:38.587 "auth": { 00:17:38.587 "state": "completed", 00:17:38.587 "digest": "sha384", 00:17:38.587 "dhgroup": "null" 00:17:38.587 } 00:17:38.587 } 00:17:38.587 ]' 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.587 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.848 17:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:39.419 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.420 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:39.680 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:39.941 00:17:39.941 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:39.941 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:39.941 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:40.201 { 00:17:40.201 "cntlid": 57, 00:17:40.201 "qid": 0, 00:17:40.201 "state": "enabled", 00:17:40.201 "listen_address": { 00:17:40.201 "trtype": "TCP", 00:17:40.201 "adrfam": "IPv4", 00:17:40.201 "traddr": "10.0.0.2", 00:17:40.201 "trsvcid": "4420" 00:17:40.201 }, 00:17:40.201 "peer_address": { 00:17:40.201 "trtype": "TCP", 00:17:40.201 "adrfam": "IPv4", 00:17:40.201 "traddr": "10.0.0.1", 00:17:40.201 "trsvcid": "55976" 00:17:40.201 }, 00:17:40.201 "auth": { 00:17:40.201 "state": "completed", 00:17:40.201 "digest": "sha384", 00:17:40.201 "dhgroup": "ffdhe2048" 00:17:40.201 } 00:17:40.201 } 
00:17:40.201 ]' 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.201 17:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.461 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:41.031 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.032 17:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:41.292 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:41.552 00:17:41.552 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:41.552 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:41.552 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:41.811 { 00:17:41.811 "cntlid": 59, 00:17:41.811 "qid": 0, 00:17:41.811 "state": "enabled", 00:17:41.811 "listen_address": { 00:17:41.811 "trtype": "TCP", 00:17:41.811 "adrfam": "IPv4", 00:17:41.811 "traddr": "10.0.0.2", 00:17:41.811 "trsvcid": "4420" 00:17:41.811 }, 00:17:41.811 "peer_address": { 00:17:41.811 "trtype": "TCP", 00:17:41.811 "adrfam": "IPv4", 00:17:41.811 "traddr": "10.0.0.1", 00:17:41.811 "trsvcid": "56002" 00:17:41.811 }, 00:17:41.811 "auth": { 00:17:41.811 "state": "completed", 00:17:41.811 "digest": "sha384", 00:17:41.811 "dhgroup": "ffdhe2048" 00:17:41.811 } 00:17:41.811 } 00:17:41.811 ]' 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.811 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.071 17:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.010 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.270 00:17:43.270 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:43.270 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:43.270 17:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:43.270 { 00:17:43.270 "cntlid": 61, 00:17:43.270 "qid": 0, 00:17:43.270 "state": "enabled", 00:17:43.270 "listen_address": { 00:17:43.270 "trtype": "TCP", 00:17:43.270 "adrfam": "IPv4", 00:17:43.270 "traddr": "10.0.0.2", 00:17:43.270 "trsvcid": "4420" 00:17:43.270 }, 00:17:43.270 "peer_address": { 00:17:43.270 "trtype": "TCP", 00:17:43.270 "adrfam": "IPv4", 00:17:43.270 "traddr": "10.0.0.1", 00:17:43.270 "trsvcid": "35676" 00:17:43.270 }, 00:17:43.270 "auth": { 00:17:43.270 "state": "completed", 00:17:43.270 "digest": "sha384", 00:17:43.270 "dhgroup": "ffdhe2048" 00:17:43.270 } 00:17:43.270 } 00:17:43.270 ]' 00:17:43.270 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.530 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.790 17:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.361 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.620 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.880 00:17:44.880 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:44.880 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:44.880 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:45.141 { 00:17:45.141 "cntlid": 63, 00:17:45.141 "qid": 0, 00:17:45.141 "state": "enabled", 00:17:45.141 "listen_address": { 00:17:45.141 "trtype": "TCP", 00:17:45.141 "adrfam": "IPv4", 00:17:45.141 "traddr": "10.0.0.2", 00:17:45.141 "trsvcid": "4420" 00:17:45.141 }, 00:17:45.141 "peer_address": { 00:17:45.141 "trtype": "TCP", 00:17:45.141 "adrfam": "IPv4", 00:17:45.141 "traddr": "10.0.0.1", 00:17:45.141 "trsvcid": "35690" 00:17:45.141 }, 00:17:45.141 "auth": { 00:17:45.141 "state": "completed", 00:17:45.141 "digest": "sha384", 00:17:45.141 "dhgroup": "ffdhe2048" 00:17:45.141 } 00:17:45.141 } 00:17:45.141 ]' 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.141 17:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.401 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.971 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.231 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:46.232 17:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:46.492 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:46.492 { 00:17:46.492 "cntlid": 65, 00:17:46.492 "qid": 0, 00:17:46.492 "state": "enabled", 00:17:46.492 "listen_address": { 00:17:46.492 "trtype": "TCP", 00:17:46.492 "adrfam": "IPv4", 00:17:46.492 "traddr": "10.0.0.2", 00:17:46.492 "trsvcid": "4420" 00:17:46.492 }, 00:17:46.492 "peer_address": { 00:17:46.492 "trtype": "TCP", 00:17:46.492 "adrfam": "IPv4", 00:17:46.492 "traddr": "10.0.0.1", 00:17:46.492 "trsvcid": "35732" 00:17:46.492 }, 00:17:46.492 "auth": { 00:17:46.492 "state": "completed", 00:17:46.492 "digest": "sha384", 00:17:46.492 "dhgroup": "ffdhe3072" 00:17:46.492 } 00:17:46.492 } 00:17:46.492 ]' 00:17:46.492 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.752 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.012 17:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.583 
17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.583 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.842 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:47.843 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:48.103 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.103 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.364 17:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.364 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:48.364 { 00:17:48.364 "cntlid": 67, 00:17:48.364 "qid": 0, 00:17:48.364 "state": "enabled", 00:17:48.364 "listen_address": { 00:17:48.364 "trtype": "TCP", 00:17:48.364 "adrfam": "IPv4", 00:17:48.364 "traddr": "10.0.0.2", 00:17:48.364 "trsvcid": 
"4420" 00:17:48.364 }, 00:17:48.364 "peer_address": { 00:17:48.364 "trtype": "TCP", 00:17:48.364 "adrfam": "IPv4", 00:17:48.364 "traddr": "10.0.0.1", 00:17:48.364 "trsvcid": "35758" 00:17:48.364 }, 00:17:48.364 "auth": { 00:17:48.364 "state": "completed", 00:17:48.364 "digest": "sha384", 00:17:48.364 "dhgroup": "ffdhe3072" 00:17:48.364 } 00:17:48.364 } 00:17:48.364 ]' 00:17:48.364 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:48.364 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.364 17:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:48.364 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.364 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:48.364 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.364 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.364 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.626 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:49.196 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.196 17:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.196 17:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.196 17:02:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.196 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.196 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:49.196 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.196 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:49.456 17:02:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:49.456 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:49.716 00:17:49.716 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:49.716 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:49.716 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:49.976 { 00:17:49.976 "cntlid": 69, 00:17:49.976 "qid": 0, 00:17:49.976 "state": "enabled", 00:17:49.976 "listen_address": { 00:17:49.976 "trtype": "TCP", 00:17:49.976 "adrfam": "IPv4", 00:17:49.976 "traddr": "10.0.0.2", 00:17:49.976 "trsvcid": "4420" 00:17:49.976 }, 00:17:49.976 "peer_address": { 00:17:49.976 "trtype": "TCP", 00:17:49.976 "adrfam": "IPv4", 00:17:49.976 "traddr": "10.0.0.1", 00:17:49.976 "trsvcid": "35782" 00:17:49.976 }, 00:17:49.976 "auth": { 00:17:49.976 "state": "completed", 00:17:49.976 "digest": "sha384", 00:17:49.976 "dhgroup": "ffdhe3072" 00:17:49.976 } 00:17:49.976 } 00:17:49.976 ]' 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.976 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.236 17:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.243 17:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.506 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:51.506 { 00:17:51.506 "cntlid": 71, 00:17:51.506 "qid": 0, 00:17:51.506 "state": "enabled", 00:17:51.506 "listen_address": { 00:17:51.506 "trtype": "TCP", 00:17:51.506 "adrfam": "IPv4", 00:17:51.506 "traddr": "10.0.0.2", 00:17:51.506 "trsvcid": "4420" 00:17:51.506 }, 00:17:51.506 "peer_address": { 00:17:51.506 "trtype": "TCP", 00:17:51.506 "adrfam": "IPv4", 00:17:51.506 "traddr": "10.0.0.1", 00:17:51.506 "trsvcid": "35814" 00:17:51.506 }, 00:17:51.506 "auth": { 00:17:51.506 "state": "completed", 00:17:51.506 "digest": "sha384", 00:17:51.506 "dhgroup": "ffdhe3072" 00:17:51.506 } 00:17:51.506 } 00:17:51.506 ]' 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.506 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:51.766 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.766 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:51.766 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.766 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.766 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.766 17:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.706 17:02:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:52.706 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:52.967 00:17:52.967 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:52.967 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:52.967 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:53.228 { 00:17:53.228 "cntlid": 73, 00:17:53.228 "qid": 0, 00:17:53.228 "state": "enabled", 00:17:53.228 "listen_address": { 00:17:53.228 "trtype": "TCP", 00:17:53.228 "adrfam": "IPv4", 00:17:53.228 "traddr": "10.0.0.2", 00:17:53.228 "trsvcid": "4420" 00:17:53.228 }, 00:17:53.228 "peer_address": { 00:17:53.228 "trtype": "TCP", 00:17:53.228 "adrfam": "IPv4", 00:17:53.228 "traddr": "10.0.0.1", 00:17:53.228 "trsvcid": "57532" 00:17:53.228 }, 00:17:53.228 "auth": { 00:17:53.228 "state": "completed", 00:17:53.228 "digest": "sha384", 00:17:53.228 "dhgroup": "ffdhe4096" 00:17:53.228 } 00:17:53.228 } 00:17:53.228 ]' 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.228 17:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:53.228 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.228 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:53.228 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.228 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.228 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.488 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.430 17:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:54.430 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:54.691 00:17:54.691 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:54.691 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:54.691 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:54.952 { 00:17:54.952 "cntlid": 75, 00:17:54.952 "qid": 0, 00:17:54.952 "state": "enabled", 00:17:54.952 "listen_address": { 00:17:54.952 "trtype": "TCP", 00:17:54.952 "adrfam": "IPv4", 00:17:54.952 "traddr": "10.0.0.2", 00:17:54.952 "trsvcid": "4420" 00:17:54.952 }, 00:17:54.952 "peer_address": { 00:17:54.952 "trtype": "TCP", 00:17:54.952 "adrfam": "IPv4", 00:17:54.952 "traddr": "10.0.0.1", 00:17:54.952 "trsvcid": "57562" 00:17:54.952 }, 00:17:54.952 "auth": { 00:17:54.952 "state": "completed", 00:17:54.952 "digest": "sha384", 00:17:54.952 "dhgroup": "ffdhe4096" 00:17:54.952 } 00:17:54.952 } 00:17:54.952 ]' 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.952 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.212 17:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:17:55.784 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:55.784 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.784 17:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.784 17:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:56.045 17:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:56.307 00:17:56.307 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:56.307 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:56.307 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
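For reference, each connect_authenticate pass traced above reduces to a short host-side RPC sequence: pin the host's DH-HMAC-CHAP digest and DH group, authorize the host NQN on the subsystem with one of the pre-loaded keys, attach a controller over the authenticated path, check the qpair's auth state, and detach. The sketch below is not part of the captured trace: the digest/dhgroup/key values are placeholders for whatever the loop is on, and it assumes target-side rpc.py calls use the target's default RPC socket while host-side calls use /var/tmp/host.sock, as in the log.

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate pass (placeholders, not the test script itself).
  set -e

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  digest=sha384        # placeholder: digest under test
  dhgroup=ffdhe4096    # placeholder: DH group under test
  key=key2             # placeholder: one of the keys loaded earlier in the run

  # Restrict the host-side NVMe driver to the digest/dhgroup being exercised.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Authorize the host NQN on the subsystem with that key (target-side RPC, default socket assumed).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

  # Attach a controller through the authenticated listener and verify it came up.
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
  "$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'

  # The qpair's auth block should report the negotiated state/digest/dhgroup.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .state, .digest, .dhgroup'

  # Tear down before the next key/dhgroup combination.
  "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0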
00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:56.568 { 00:17:56.568 "cntlid": 77, 00:17:56.568 "qid": 0, 00:17:56.568 "state": "enabled", 00:17:56.568 "listen_address": { 00:17:56.568 "trtype": "TCP", 00:17:56.568 "adrfam": "IPv4", 00:17:56.568 "traddr": "10.0.0.2", 00:17:56.568 "trsvcid": "4420" 00:17:56.568 }, 00:17:56.568 "peer_address": { 00:17:56.568 "trtype": "TCP", 00:17:56.568 "adrfam": "IPv4", 00:17:56.568 "traddr": "10.0.0.1", 00:17:56.568 "trsvcid": "57592" 00:17:56.568 }, 00:17:56.568 "auth": { 00:17:56.568 "state": "completed", 00:17:56.568 "digest": "sha384", 00:17:56.568 "dhgroup": "ffdhe4096" 00:17:56.568 } 00:17:56.568 } 00:17:56.568 ]' 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.568 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.828 17:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.769 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.029 00:17:58.029 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:58.029 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:58.029 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:58.290 { 00:17:58.290 "cntlid": 79, 00:17:58.290 "qid": 0, 00:17:58.290 "state": "enabled", 00:17:58.290 "listen_address": { 00:17:58.290 "trtype": "TCP", 00:17:58.290 "adrfam": "IPv4", 00:17:58.290 "traddr": "10.0.0.2", 00:17:58.290 "trsvcid": "4420" 00:17:58.290 }, 00:17:58.290 "peer_address": { 00:17:58.290 "trtype": "TCP", 00:17:58.290 "adrfam": "IPv4", 00:17:58.290 "traddr": "10.0.0.1", 00:17:58.290 "trsvcid": "57616" 00:17:58.290 }, 00:17:58.290 "auth": { 00:17:58.290 "state": "completed", 00:17:58.290 "digest": "sha384", 00:17:58.290 "dhgroup": "ffdhe4096" 00:17:58.290 } 00:17:58.290 } 00:17:58.290 ]' 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.290 17:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:58.290 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.290 17:02:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.290 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.551 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.121 17:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:59.382 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:59.642 00:17:59.642 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:59.642 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:59.642 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:59.902 { 00:17:59.902 "cntlid": 81, 00:17:59.902 "qid": 0, 00:17:59.902 "state": "enabled", 00:17:59.902 "listen_address": { 00:17:59.902 "trtype": "TCP", 00:17:59.902 "adrfam": "IPv4", 00:17:59.902 "traddr": "10.0.0.2", 00:17:59.902 "trsvcid": "4420" 00:17:59.902 }, 00:17:59.902 "peer_address": { 00:17:59.902 "trtype": "TCP", 00:17:59.902 "adrfam": "IPv4", 00:17:59.902 "traddr": "10.0.0.1", 00:17:59.902 "trsvcid": "57654" 00:17:59.902 }, 00:17:59.902 "auth": { 00:17:59.902 "state": "completed", 00:17:59.902 "digest": "sha384", 00:17:59.902 "dhgroup": "ffdhe6144" 00:17:59.902 } 00:17:59.902 } 00:17:59.902 ]' 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.902 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:00.163 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.163 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.163 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.163 17:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:01.105 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.105 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.105 17:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.105 17:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:01.105 17:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.105 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:01.106 17:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:01.366 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:01.625 { 00:18:01.625 "cntlid": 83, 00:18:01.625 "qid": 0, 00:18:01.625 "state": "enabled", 00:18:01.625 "listen_address": { 00:18:01.625 "trtype": "TCP", 00:18:01.625 "adrfam": "IPv4", 00:18:01.625 "traddr": "10.0.0.2", 00:18:01.625 "trsvcid": "4420" 00:18:01.625 }, 00:18:01.625 "peer_address": { 00:18:01.625 
"trtype": "TCP", 00:18:01.625 "adrfam": "IPv4", 00:18:01.625 "traddr": "10.0.0.1", 00:18:01.625 "trsvcid": "57676" 00:18:01.625 }, 00:18:01.625 "auth": { 00:18:01.625 "state": "completed", 00:18:01.625 "digest": "sha384", 00:18:01.625 "dhgroup": "ffdhe6144" 00:18:01.625 } 00:18:01.625 } 00:18:01.625 ]' 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.625 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:01.884 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.884 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:01.884 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.884 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.884 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.885 17:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:02.823 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.393 00:18:03.393 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:03.393 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.393 17:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:03.393 { 00:18:03.393 "cntlid": 85, 00:18:03.393 "qid": 0, 00:18:03.393 "state": "enabled", 00:18:03.393 "listen_address": { 00:18:03.393 "trtype": "TCP", 00:18:03.393 "adrfam": "IPv4", 00:18:03.393 "traddr": "10.0.0.2", 00:18:03.393 "trsvcid": "4420" 00:18:03.393 }, 00:18:03.393 "peer_address": { 00:18:03.393 "trtype": "TCP", 00:18:03.393 "adrfam": "IPv4", 00:18:03.393 "traddr": "10.0.0.1", 00:18:03.393 "trsvcid": "33354" 00:18:03.393 }, 00:18:03.393 "auth": { 00:18:03.393 "state": "completed", 00:18:03.393 "digest": "sha384", 00:18:03.393 "dhgroup": "ffdhe6144" 00:18:03.393 } 00:18:03.393 } 00:18:03.393 ]' 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.393 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:03.652 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.652 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:03.652 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.652 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.652 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.652 17:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.602 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.173 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.173 17:02:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:05.173 { 00:18:05.173 "cntlid": 87, 00:18:05.173 "qid": 0, 00:18:05.173 "state": "enabled", 00:18:05.173 "listen_address": { 00:18:05.173 "trtype": "TCP", 00:18:05.173 "adrfam": "IPv4", 00:18:05.173 "traddr": "10.0.0.2", 00:18:05.173 "trsvcid": "4420" 00:18:05.173 }, 00:18:05.173 "peer_address": { 00:18:05.173 "trtype": "TCP", 00:18:05.173 "adrfam": "IPv4", 00:18:05.173 "traddr": "10.0.0.1", 00:18:05.173 "trsvcid": "33376" 00:18:05.173 }, 00:18:05.173 "auth": { 00:18:05.173 "state": "completed", 00:18:05.173 "digest": "sha384", 00:18:05.173 "dhgroup": "ffdhe6144" 00:18:05.173 } 00:18:05.173 } 00:18:05.173 ]' 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.173 17:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:05.434 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.434 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.434 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.434 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.374 17:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:06.374 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:06.944 00:18:06.944 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:06.944 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.944 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:07.205 { 00:18:07.205 "cntlid": 89, 00:18:07.205 "qid": 0, 00:18:07.205 "state": "enabled", 00:18:07.205 "listen_address": { 00:18:07.205 "trtype": "TCP", 00:18:07.205 "adrfam": "IPv4", 00:18:07.205 "traddr": "10.0.0.2", 00:18:07.205 "trsvcid": "4420" 00:18:07.205 }, 00:18:07.205 "peer_address": { 00:18:07.205 "trtype": "TCP", 00:18:07.205 "adrfam": "IPv4", 00:18:07.205 "traddr": "10.0.0.1", 00:18:07.205 "trsvcid": "33414" 00:18:07.205 }, 00:18:07.205 "auth": { 00:18:07.205 "state": "completed", 00:18:07.205 "digest": "sha384", 00:18:07.205 "dhgroup": "ffdhe8192" 00:18:07.205 } 00:18:07.205 } 00:18:07.205 ]' 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:07.205 17:02:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.205 17:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.466 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.036 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:08.297 17:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:08.868 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.868 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:08.868 { 00:18:08.868 "cntlid": 91, 00:18:08.868 "qid": 0, 00:18:08.868 "state": "enabled", 00:18:08.868 "listen_address": { 00:18:08.868 "trtype": "TCP", 00:18:08.868 "adrfam": "IPv4", 00:18:08.868 "traddr": "10.0.0.2", 00:18:08.868 "trsvcid": "4420" 00:18:08.868 }, 00:18:08.868 "peer_address": { 00:18:08.868 "trtype": "TCP", 00:18:08.868 "adrfam": "IPv4", 00:18:08.868 "traddr": "10.0.0.1", 00:18:08.868 "trsvcid": "33448" 00:18:08.868 }, 00:18:08.868 "auth": { 00:18:08.868 "state": "completed", 00:18:08.868 "digest": "sha384", 00:18:08.868 "dhgroup": "ffdhe8192" 00:18:08.868 } 00:18:08.868 } 00:18:08.868 ]' 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.128 17:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.389 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.959 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.219 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:18:10.219 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:10.219 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.219 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.219 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.220 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:10.220 17:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.220 17:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.220 17:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.220 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:10.220 17:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:10.790 00:18:10.790 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:10.790 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:10.790 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
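For reference, every digest/dhgroup/key combination in this log runs the same connect_authenticate cycle. The lines below are a condensed sketch assembled only from commands already visible in the trace, not part of the captured output: rpc_cmd is the scripts' target-side RPC wrapper, hostrpc expands to rpc.py -s /var/tmp/host.sock, and the HOSTRPC/HOSTNQN/SUBNQN shorthands plus the elided DH-HMAC-CHAP secret are introduced here purely for brevity.

HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# host side: restrict the initiator to the digest/dhgroup pair under test
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# target side: allow the host NQN to authenticate with the key under test
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0
# host side: attach a controller with the same key, then inspect the resulting qpair
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN"   # .auth.state is expected to be "completed"
$HOSTRPC bdev_nvme_detach_controller nvme0
# the same key is then exercised through the kernel initiator before tearing down
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret 'DHHC-1:00:...'           # full secret for key0 appears verbatim in the trace
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"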
00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:11.051 { 00:18:11.051 "cntlid": 93, 00:18:11.051 "qid": 0, 00:18:11.051 "state": "enabled", 00:18:11.051 "listen_address": { 00:18:11.051 "trtype": "TCP", 00:18:11.051 "adrfam": "IPv4", 00:18:11.051 "traddr": "10.0.0.2", 00:18:11.051 "trsvcid": "4420" 00:18:11.051 }, 00:18:11.051 "peer_address": { 00:18:11.051 "trtype": "TCP", 00:18:11.051 "adrfam": "IPv4", 00:18:11.051 "traddr": "10.0.0.1", 00:18:11.051 "trsvcid": "33472" 00:18:11.051 }, 00:18:11.051 "auth": { 00:18:11.051 "state": "completed", 00:18:11.051 "digest": "sha384", 00:18:11.051 "dhgroup": "ffdhe8192" 00:18:11.051 } 00:18:11.051 } 00:18:11.051 ]' 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.051 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.311 17:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.882 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.143 17:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.714 00:18:12.714 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:12.714 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:12.714 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:12.975 { 00:18:12.975 "cntlid": 95, 00:18:12.975 "qid": 0, 00:18:12.975 "state": "enabled", 00:18:12.975 "listen_address": { 00:18:12.975 "trtype": "TCP", 00:18:12.975 "adrfam": "IPv4", 00:18:12.975 "traddr": "10.0.0.2", 00:18:12.975 "trsvcid": "4420" 00:18:12.975 }, 00:18:12.975 "peer_address": { 00:18:12.975 "trtype": "TCP", 00:18:12.975 "adrfam": "IPv4", 00:18:12.975 "traddr": "10.0.0.1", 00:18:12.975 "trsvcid": "34008" 00:18:12.975 }, 00:18:12.975 "auth": { 00:18:12.975 "state": "completed", 00:18:12.975 "digest": "sha384", 00:18:12.975 "dhgroup": "ffdhe8192" 00:18:12.975 } 00:18:12.975 } 00:18:12.975 ]' 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.975 17:02:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.975 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.235 17:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.830 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:14.091 17:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:14.352 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.352 17:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:14.616 { 00:18:14.616 "cntlid": 97, 00:18:14.616 "qid": 0, 00:18:14.616 "state": "enabled", 00:18:14.616 "listen_address": { 00:18:14.616 "trtype": "TCP", 00:18:14.616 "adrfam": "IPv4", 00:18:14.616 "traddr": "10.0.0.2", 00:18:14.616 "trsvcid": "4420" 00:18:14.616 }, 00:18:14.616 "peer_address": { 00:18:14.616 "trtype": "TCP", 00:18:14.616 "adrfam": "IPv4", 00:18:14.616 "traddr": "10.0.0.1", 00:18:14.616 "trsvcid": "34046" 00:18:14.616 }, 00:18:14.616 "auth": { 00:18:14.616 "state": "completed", 00:18:14.616 "digest": "sha512", 00:18:14.616 "dhgroup": "null" 00:18:14.616 } 00:18:14.616 } 00:18:14.616 ]' 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.616 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.877 17:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:15.448 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.449 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.449 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.449 17:02:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.449 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.449 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:15.449 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:15.449 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:15.709 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:15.970 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.970 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:16.231 { 00:18:16.231 "cntlid": 99, 00:18:16.231 "qid": 0, 00:18:16.231 "state": "enabled", 00:18:16.231 "listen_address": { 00:18:16.231 "trtype": "TCP", 00:18:16.231 "adrfam": "IPv4", 00:18:16.231 "traddr": "10.0.0.2", 00:18:16.231 "trsvcid": "4420" 00:18:16.231 }, 
00:18:16.231 "peer_address": { 00:18:16.231 "trtype": "TCP", 00:18:16.231 "adrfam": "IPv4", 00:18:16.231 "traddr": "10.0.0.1", 00:18:16.231 "trsvcid": "34082" 00:18:16.231 }, 00:18:16.231 "auth": { 00:18:16.231 "state": "completed", 00:18:16.231 "digest": "sha512", 00:18:16.231 "dhgroup": "null" 00:18:16.231 } 00:18:16.231 } 00:18:16.231 ]' 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.231 17:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.491 17:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.062 17:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:17.323 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:17.584 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.584 17:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:17.845 { 00:18:17.845 "cntlid": 101, 00:18:17.845 "qid": 0, 00:18:17.845 "state": "enabled", 00:18:17.845 "listen_address": { 00:18:17.845 "trtype": "TCP", 00:18:17.845 "adrfam": "IPv4", 00:18:17.845 "traddr": "10.0.0.2", 00:18:17.845 "trsvcid": "4420" 00:18:17.845 }, 00:18:17.845 "peer_address": { 00:18:17.845 "trtype": "TCP", 00:18:17.845 "adrfam": "IPv4", 00:18:17.845 "traddr": "10.0.0.1", 00:18:17.845 "trsvcid": "34112" 00:18:17.845 }, 00:18:17.845 "auth": { 00:18:17.845 "state": "completed", 00:18:17.845 "digest": "sha512", 00:18:17.845 "dhgroup": "null" 00:18:17.845 } 00:18:17.845 } 00:18:17.845 ]' 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.845 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.106 17:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.677 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.937 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.198 00:18:19.198 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:19.198 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:19.198 17:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:19.458 { 00:18:19.458 "cntlid": 103, 00:18:19.458 "qid": 0, 00:18:19.458 "state": "enabled", 00:18:19.458 "listen_address": { 00:18:19.458 "trtype": "TCP", 00:18:19.458 "adrfam": "IPv4", 00:18:19.458 "traddr": "10.0.0.2", 00:18:19.458 "trsvcid": "4420" 00:18:19.458 }, 00:18:19.458 "peer_address": { 00:18:19.458 "trtype": "TCP", 00:18:19.458 "adrfam": "IPv4", 00:18:19.458 "traddr": "10.0.0.1", 00:18:19.458 "trsvcid": "34140" 00:18:19.458 }, 00:18:19.458 "auth": { 00:18:19.458 "state": "completed", 00:18:19.458 "digest": "sha512", 00:18:19.458 "dhgroup": "null" 00:18:19.458 } 00:18:19.458 } 00:18:19.458 ]' 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.458 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.719 17:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.291 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:20.551 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:20.811 00:18:20.811 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:20.811 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:20.811 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:21.072 { 00:18:21.072 "cntlid": 105, 00:18:21.072 "qid": 0, 00:18:21.072 "state": "enabled", 00:18:21.072 "listen_address": { 00:18:21.072 "trtype": "TCP", 00:18:21.072 "adrfam": "IPv4", 00:18:21.072 "traddr": "10.0.0.2", 00:18:21.072 "trsvcid": "4420" 00:18:21.072 }, 00:18:21.072 "peer_address": { 00:18:21.072 "trtype": "TCP", 00:18:21.072 "adrfam": "IPv4", 00:18:21.072 "traddr": "10.0.0.1", 00:18:21.072 "trsvcid": "34178" 00:18:21.072 }, 00:18:21.072 "auth": { 00:18:21.072 "state": "completed", 00:18:21.072 "digest": "sha512", 00:18:21.072 "dhgroup": "ffdhe2048" 00:18:21.072 } 00:18:21.072 } 00:18:21.072 ]' 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:21.072 17:02:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.072 17:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.332 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:22.272 17:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:22.533 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:22.533 { 00:18:22.533 "cntlid": 107, 00:18:22.533 "qid": 0, 00:18:22.533 "state": "enabled", 00:18:22.533 "listen_address": { 00:18:22.533 "trtype": "TCP", 00:18:22.533 "adrfam": "IPv4", 00:18:22.533 "traddr": "10.0.0.2", 00:18:22.533 "trsvcid": "4420" 00:18:22.533 }, 00:18:22.533 "peer_address": { 00:18:22.533 "trtype": "TCP", 00:18:22.533 "adrfam": "IPv4", 00:18:22.533 "traddr": "10.0.0.1", 00:18:22.533 "trsvcid": "35404" 00:18:22.533 }, 00:18:22.533 "auth": { 00:18:22.533 "state": "completed", 00:18:22.533 "digest": "sha512", 00:18:22.533 "dhgroup": "ffdhe2048" 00:18:22.533 } 00:18:22.533 } 00:18:22.533 ]' 00:18:22.533 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.794 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.054 17:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.622 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.882 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:24.141 00:18:24.141 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:24.141 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:24.141 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.399 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.399 17:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.399 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.399 17:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
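The qpair dump that follows is what each iteration's verification step parses. As a minimal sketch (same jq filters as in the trace, with plain string comparisons standing in for the script's escaped glob matches, and expected values taken from this sha512/ffdhe2048 pass):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP finished successfully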
00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:24.399 { 00:18:24.399 "cntlid": 109, 00:18:24.399 "qid": 0, 00:18:24.399 "state": "enabled", 00:18:24.399 "listen_address": { 00:18:24.399 "trtype": "TCP", 00:18:24.399 "adrfam": "IPv4", 00:18:24.399 "traddr": "10.0.0.2", 00:18:24.399 "trsvcid": "4420" 00:18:24.399 }, 00:18:24.399 "peer_address": { 00:18:24.399 "trtype": "TCP", 00:18:24.399 "adrfam": "IPv4", 00:18:24.399 "traddr": "10.0.0.1", 00:18:24.399 "trsvcid": "35422" 00:18:24.399 }, 00:18:24.399 "auth": { 00:18:24.399 "state": "completed", 00:18:24.399 "digest": "sha512", 00:18:24.399 "dhgroup": "ffdhe2048" 00:18:24.399 } 00:18:24.399 } 00:18:24.399 ]' 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.399 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.660 17:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.232 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.493 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.755 00:18:25.755 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:25.755 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:25.755 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:26.014 { 00:18:26.014 "cntlid": 111, 00:18:26.014 "qid": 0, 00:18:26.014 "state": "enabled", 00:18:26.014 "listen_address": { 00:18:26.014 "trtype": "TCP", 00:18:26.014 "adrfam": "IPv4", 00:18:26.014 "traddr": "10.0.0.2", 00:18:26.014 "trsvcid": "4420" 00:18:26.014 }, 00:18:26.014 "peer_address": { 00:18:26.014 "trtype": "TCP", 00:18:26.014 "adrfam": "IPv4", 00:18:26.014 "traddr": "10.0.0.1", 00:18:26.014 "trsvcid": "35448" 00:18:26.014 }, 00:18:26.014 "auth": { 00:18:26.014 "state": "completed", 00:18:26.014 "digest": "sha512", 00:18:26.014 "dhgroup": "ffdhe2048" 00:18:26.014 } 00:18:26.014 } 00:18:26.014 ]' 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.014 17:03:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.014 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.274 17:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.844 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:27.105 17:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:27.365 00:18:27.365 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:27.366 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:27.366 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.366 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.366 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.366 17:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.366 17:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:27.625 { 00:18:27.625 "cntlid": 113, 00:18:27.625 "qid": 0, 00:18:27.625 "state": "enabled", 00:18:27.625 "listen_address": { 00:18:27.625 "trtype": "TCP", 00:18:27.625 "adrfam": "IPv4", 00:18:27.625 "traddr": "10.0.0.2", 00:18:27.625 "trsvcid": "4420" 00:18:27.625 }, 00:18:27.625 "peer_address": { 00:18:27.625 "trtype": "TCP", 00:18:27.625 "adrfam": "IPv4", 00:18:27.625 "traddr": "10.0.0.1", 00:18:27.625 "trsvcid": "35482" 00:18:27.625 }, 00:18:27.625 "auth": { 00:18:27.625 "state": "completed", 00:18:27.625 "digest": "sha512", 00:18:27.625 "dhgroup": "ffdhe3072" 00:18:27.625 } 00:18:27.625 } 00:18:27.625 ]' 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.625 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.884 17:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.456 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:28.717 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:28.979 00:18:28.979 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:28.979 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:28.979 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:29.241 { 00:18:29.241 "cntlid": 115, 00:18:29.241 "qid": 0, 00:18:29.241 "state": "enabled", 00:18:29.241 "listen_address": { 00:18:29.241 "trtype": "TCP", 00:18:29.241 "adrfam": "IPv4", 00:18:29.241 "traddr": "10.0.0.2", 00:18:29.241 "trsvcid": "4420" 00:18:29.241 }, 00:18:29.241 "peer_address": { 00:18:29.241 
"trtype": "TCP", 00:18:29.241 "adrfam": "IPv4", 00:18:29.241 "traddr": "10.0.0.1", 00:18:29.241 "trsvcid": "35514" 00:18:29.241 }, 00:18:29.241 "auth": { 00:18:29.241 "state": "completed", 00:18:29.241 "digest": "sha512", 00:18:29.241 "dhgroup": "ffdhe3072" 00:18:29.241 } 00:18:29.241 } 00:18:29.241 ]' 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.241 17:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.501 17:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.072 17:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.333 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.594 00:18:30.594 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:30.594 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:30.594 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:30.854 { 00:18:30.854 "cntlid": 117, 00:18:30.854 "qid": 0, 00:18:30.854 "state": "enabled", 00:18:30.854 "listen_address": { 00:18:30.854 "trtype": "TCP", 00:18:30.854 "adrfam": "IPv4", 00:18:30.854 "traddr": "10.0.0.2", 00:18:30.854 "trsvcid": "4420" 00:18:30.854 }, 00:18:30.854 "peer_address": { 00:18:30.854 "trtype": "TCP", 00:18:30.854 "adrfam": "IPv4", 00:18:30.854 "traddr": "10.0.0.1", 00:18:30.854 "trsvcid": "35538" 00:18:30.854 }, 00:18:30.854 "auth": { 00:18:30.854 "state": "completed", 00:18:30.854 "digest": "sha512", 00:18:30.854 "dhgroup": "ffdhe3072" 00:18:30.854 } 00:18:30.854 } 00:18:30.854 ]' 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.854 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.114 17:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.684 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.946 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.207 00:18:32.207 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:32.207 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:32.207 17:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.468 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.468 17:03:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.468 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.468 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.468 17:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.468 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:32.468 { 00:18:32.468 "cntlid": 119, 00:18:32.469 "qid": 0, 00:18:32.469 "state": "enabled", 00:18:32.469 "listen_address": { 00:18:32.469 "trtype": "TCP", 00:18:32.469 "adrfam": "IPv4", 00:18:32.469 "traddr": "10.0.0.2", 00:18:32.469 "trsvcid": "4420" 00:18:32.469 }, 00:18:32.469 "peer_address": { 00:18:32.469 "trtype": "TCP", 00:18:32.469 "adrfam": "IPv4", 00:18:32.469 "traddr": "10.0.0.1", 00:18:32.469 "trsvcid": "38084" 00:18:32.469 }, 00:18:32.469 "auth": { 00:18:32.469 "state": "completed", 00:18:32.469 "digest": "sha512", 00:18:32.469 "dhgroup": "ffdhe3072" 00:18:32.469 } 00:18:32.469 } 00:18:32.469 ]' 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.469 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.730 17:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:33.675 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:33.936 00:18:33.936 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:33.936 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:33.936 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:34.198 { 00:18:34.198 "cntlid": 121, 00:18:34.198 "qid": 0, 00:18:34.198 "state": "enabled", 00:18:34.198 "listen_address": { 00:18:34.198 "trtype": "TCP", 00:18:34.198 "adrfam": "IPv4", 00:18:34.198 "traddr": "10.0.0.2", 00:18:34.198 "trsvcid": "4420" 00:18:34.198 }, 00:18:34.198 "peer_address": { 00:18:34.198 "trtype": "TCP", 00:18:34.198 "adrfam": "IPv4", 00:18:34.198 "traddr": "10.0.0.1", 00:18:34.198 "trsvcid": "38120" 00:18:34.198 }, 00:18:34.198 "auth": { 00:18:34.198 "state": "completed", 00:18:34.198 "digest": "sha512", 00:18:34.198 "dhgroup": "ffdhe4096" 00:18:34.198 } 00:18:34.198 } 00:18:34.198 ]' 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:34.198 17:03:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.198 17:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.458 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:35.030 17:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:35.326 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:35.593 00:18:35.593 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:35.593 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:35.593 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:35.854 { 00:18:35.854 "cntlid": 123, 00:18:35.854 "qid": 0, 00:18:35.854 "state": "enabled", 00:18:35.854 "listen_address": { 00:18:35.854 "trtype": "TCP", 00:18:35.854 "adrfam": "IPv4", 00:18:35.854 "traddr": "10.0.0.2", 00:18:35.854 "trsvcid": "4420" 00:18:35.854 }, 00:18:35.854 "peer_address": { 00:18:35.854 "trtype": "TCP", 00:18:35.854 "adrfam": "IPv4", 00:18:35.854 "traddr": "10.0.0.1", 00:18:35.854 "trsvcid": "38140" 00:18:35.854 }, 00:18:35.854 "auth": { 00:18:35.854 "state": "completed", 00:18:35.854 "digest": "sha512", 00:18:35.854 "dhgroup": "ffdhe4096" 00:18:35.854 } 00:18:35.854 } 00:18:35.854 ]' 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.854 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.114 17:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.684 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.943 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.204 00:18:37.204 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:37.204 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.204 17:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
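The trace around this point repeats one authentication cycle per DH-CHAP key (key0 through key3) for each FFDHE group exercised so far (ffdhe2048 up through ffdhe6144 in this part of the log): restrict the host to a single digest/dhgroup pair, register the host NQN on the target with one key, attach from the SPDK host application, check the negotiated parameters on the target qpair, then redo the handshake with the kernel initiator before removing the host again. Below is a condensed sketch of one such cycle, matching the sha512/ffdhe4096/key2 iteration in the surrounding trace; it assumes the target-side rpc_cmd maps to rpc.py on the target's default RPC socket, that key2 was registered during earlier test setup, and the --dhchap-secret value is a placeholder rather than the real key material.

    # Sketch of one connect_authenticate cycle from target/auth.sh, as exercised in the trace above.
    # Assumptions: rpc_cmd == rpc.py against the target's default RPC socket; /var/tmp/host.sock is
    # the host application's RPC socket; key2 was loaded earlier in the test; the DHHC-1 secret
    # below is a placeholder for the matching key material.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Limit the host application to one digest/dhgroup combination for this iteration.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Authorize the host on the target with a specific key, then attach from the host application.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2

    # Confirm the controller attached and inspect the negotiated auth state on the target qpair.
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

    # Drop the host-application path and repeat the handshake with the kernel initiator (nvme-cli).
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 'DHHC-1:02:<key2 secret>'
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The expected outcome of each iteration is the jq output sha512 / ffdhe4096 / completed, which is exactly what the [[ ... ]] comparisons in the trace assert before the controller is torn down.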
00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:37.464 { 00:18:37.464 "cntlid": 125, 00:18:37.464 "qid": 0, 00:18:37.464 "state": "enabled", 00:18:37.464 "listen_address": { 00:18:37.464 "trtype": "TCP", 00:18:37.464 "adrfam": "IPv4", 00:18:37.464 "traddr": "10.0.0.2", 00:18:37.464 "trsvcid": "4420" 00:18:37.464 }, 00:18:37.464 "peer_address": { 00:18:37.464 "trtype": "TCP", 00:18:37.464 "adrfam": "IPv4", 00:18:37.464 "traddr": "10.0.0.1", 00:18:37.464 "trsvcid": "38168" 00:18:37.464 }, 00:18:37.464 "auth": { 00:18:37.464 "state": "completed", 00:18:37.464 "digest": "sha512", 00:18:37.464 "dhgroup": "ffdhe4096" 00:18:37.464 } 00:18:37.464 } 00:18:37.464 ]' 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.464 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.724 17:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:38.297 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.558 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.559 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.820 00:18:38.820 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:38.820 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:38.820 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.082 { 00:18:39.082 "cntlid": 127, 00:18:39.082 "qid": 0, 00:18:39.082 "state": "enabled", 00:18:39.082 "listen_address": { 00:18:39.082 "trtype": "TCP", 00:18:39.082 "adrfam": "IPv4", 00:18:39.082 "traddr": "10.0.0.2", 00:18:39.082 "trsvcid": "4420" 00:18:39.082 }, 00:18:39.082 "peer_address": { 00:18:39.082 "trtype": "TCP", 00:18:39.082 "adrfam": "IPv4", 00:18:39.082 "traddr": "10.0.0.1", 00:18:39.082 "trsvcid": "38198" 00:18:39.082 }, 00:18:39.082 "auth": { 00:18:39.082 "state": "completed", 00:18:39.082 "digest": "sha512", 00:18:39.082 "dhgroup": "ffdhe4096" 00:18:39.082 } 00:18:39.082 } 00:18:39.082 ]' 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.082 17:03:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.082 17:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.343 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.285 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:40.286 17:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:40.546 00:18:40.546 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.546 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:40.546 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:40.806 { 00:18:40.806 "cntlid": 129, 00:18:40.806 "qid": 0, 00:18:40.806 "state": "enabled", 00:18:40.806 "listen_address": { 00:18:40.806 "trtype": "TCP", 00:18:40.806 "adrfam": "IPv4", 00:18:40.806 "traddr": "10.0.0.2", 00:18:40.806 "trsvcid": "4420" 00:18:40.806 }, 00:18:40.806 "peer_address": { 00:18:40.806 "trtype": "TCP", 00:18:40.806 "adrfam": "IPv4", 00:18:40.806 "traddr": "10.0.0.1", 00:18:40.806 "trsvcid": "38222" 00:18:40.806 }, 00:18:40.806 "auth": { 00:18:40.806 "state": "completed", 00:18:40.806 "digest": "sha512", 00:18:40.806 "dhgroup": "ffdhe6144" 00:18:40.806 } 00:18:40.806 } 00:18:40.806 ]' 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:40.806 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.807 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.807 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.076 17:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.647 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:41.906 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:42.166 00:18:42.166 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:42.166 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.166 17:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:42.425 { 00:18:42.425 "cntlid": 131, 00:18:42.425 "qid": 0, 00:18:42.425 "state": "enabled", 00:18:42.425 "listen_address": { 00:18:42.425 "trtype": "TCP", 00:18:42.425 "adrfam": "IPv4", 00:18:42.425 "traddr": "10.0.0.2", 00:18:42.425 "trsvcid": "4420" 00:18:42.425 }, 00:18:42.425 "peer_address": { 00:18:42.425 
"trtype": "TCP", 00:18:42.425 "adrfam": "IPv4", 00:18:42.425 "traddr": "10.0.0.1", 00:18:42.425 "trsvcid": "49790" 00:18:42.425 }, 00:18:42.425 "auth": { 00:18:42.425 "state": "completed", 00:18:42.425 "digest": "sha512", 00:18:42.425 "dhgroup": "ffdhe6144" 00:18:42.425 } 00:18:42.425 } 00:18:42.425 ]' 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.425 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:42.683 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.683 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.683 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.683 17:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.253 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:43.514 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:43.775 00:18:43.775 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.775 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.775 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.036 { 00:18:44.036 "cntlid": 133, 00:18:44.036 "qid": 0, 00:18:44.036 "state": "enabled", 00:18:44.036 "listen_address": { 00:18:44.036 "trtype": "TCP", 00:18:44.036 "adrfam": "IPv4", 00:18:44.036 "traddr": "10.0.0.2", 00:18:44.036 "trsvcid": "4420" 00:18:44.036 }, 00:18:44.036 "peer_address": { 00:18:44.036 "trtype": "TCP", 00:18:44.036 "adrfam": "IPv4", 00:18:44.036 "traddr": "10.0.0.1", 00:18:44.036 "trsvcid": "49824" 00:18:44.036 }, 00:18:44.036 "auth": { 00:18:44.036 "state": "completed", 00:18:44.036 "digest": "sha512", 00:18:44.036 "dhgroup": "ffdhe6144" 00:18:44.036 } 00:18:44.036 } 00:18:44.036 ]' 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.036 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.297 17:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.242 17:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.503 00:18:45.503 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:45.503 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:45.503 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.764 17:03:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:45.764 { 00:18:45.764 "cntlid": 135, 00:18:45.764 "qid": 0, 00:18:45.764 "state": "enabled", 00:18:45.764 "listen_address": { 00:18:45.764 "trtype": "TCP", 00:18:45.764 "adrfam": "IPv4", 00:18:45.764 "traddr": "10.0.0.2", 00:18:45.764 "trsvcid": "4420" 00:18:45.764 }, 00:18:45.764 "peer_address": { 00:18:45.764 "trtype": "TCP", 00:18:45.764 "adrfam": "IPv4", 00:18:45.764 "traddr": "10.0.0.1", 00:18:45.764 "trsvcid": "49834" 00:18:45.764 }, 00:18:45.764 "auth": { 00:18:45.764 "state": "completed", 00:18:45.764 "digest": "sha512", 00:18:45.764 "dhgroup": "ffdhe6144" 00:18:45.764 } 00:18:45.764 } 00:18:45.764 ]' 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:45.764 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.765 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:45.765 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.765 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.765 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.026 17:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:46.969 17:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:47.540 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.540 { 00:18:47.540 "cntlid": 137, 00:18:47.540 "qid": 0, 00:18:47.540 "state": "enabled", 00:18:47.540 "listen_address": { 00:18:47.540 "trtype": "TCP", 00:18:47.540 "adrfam": "IPv4", 00:18:47.540 "traddr": "10.0.0.2", 00:18:47.540 "trsvcid": "4420" 00:18:47.540 }, 00:18:47.540 "peer_address": { 00:18:47.540 "trtype": "TCP", 00:18:47.540 "adrfam": "IPv4", 00:18:47.540 "traddr": "10.0.0.1", 00:18:47.540 "trsvcid": "49856" 00:18:47.540 }, 00:18:47.540 "auth": { 00:18:47.540 "state": "completed", 00:18:47.540 "digest": "sha512", 00:18:47.540 "dhgroup": "ffdhe8192" 00:18:47.540 } 00:18:47.540 } 00:18:47.540 ]' 00:18:47.540 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:47.801 17:03:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.801 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.801 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.801 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.801 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.801 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.801 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.061 17:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.633 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:48.893 17:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:49.461 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.461 { 00:18:49.461 "cntlid": 139, 00:18:49.461 "qid": 0, 00:18:49.461 "state": "enabled", 00:18:49.461 "listen_address": { 00:18:49.461 "trtype": "TCP", 00:18:49.461 "adrfam": "IPv4", 00:18:49.461 "traddr": "10.0.0.2", 00:18:49.461 "trsvcid": "4420" 00:18:49.461 }, 00:18:49.461 "peer_address": { 00:18:49.461 "trtype": "TCP", 00:18:49.461 "adrfam": "IPv4", 00:18:49.461 "traddr": "10.0.0.1", 00:18:49.461 "trsvcid": "49882" 00:18:49.461 }, 00:18:49.461 "auth": { 00:18:49.461 "state": "completed", 00:18:49.461 "digest": "sha512", 00:18:49.461 "dhgroup": "ffdhe8192" 00:18:49.461 } 00:18:49.461 } 00:18:49.461 ]' 00:18:49.461 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.722 17:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OTk4NzNhNGFlMDYyZDY5ZWY5ZWU1ZWVmNTM3ODM1ODJsdxxp: 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:50.664 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.237 00:18:51.237 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:51.237 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.237 17:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.497 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.497 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.497 17:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.497 17:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.497 17:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
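The trace above and below repeats the same connect_authenticate loop once per DH-HMAC-CHAP digest/dhgroup/key combination. A condensed sketch of one iteration follows, using the sha512/ffdhe8192/key2 case being exercised at this point. The NQNs, addresses, RPC script path and host RPC socket are the ones shown in the trace; the shell variable names are illustrative, the target-side calls are shown going to rpc.py's default socket (an assumption of this sketch), and the DHHC-1 secret is a placeholder rather than a value from this run.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock                                    # host-side bdev_nvme RPC server
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # host side: restrict the initiator to a single digest/dhgroup pair
  $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # target side: allow the host with a specific DH-CHAP key, then attach from the host
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2
  $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2

  # verify the controller came up and the qpair finished authentication as expected
  $RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'      # nvme0
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # completed
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # sha512
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # ffdhe8192

  # repeat the handshake with the kernel initiator, then tear the pairing down
  $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:02:<key2 secret>'                       # placeholder, not from this run
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

Near the end of this test the same attach is run as a negative case: with only key1 registered for the host, attaching with key2 is expected to fail, and the trace records the resulting JSON-RPC error (-32602, Invalid parameters) before cleanup.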
00:18:51.497 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.497 { 00:18:51.497 "cntlid": 141, 00:18:51.497 "qid": 0, 00:18:51.497 "state": "enabled", 00:18:51.497 "listen_address": { 00:18:51.497 "trtype": "TCP", 00:18:51.497 "adrfam": "IPv4", 00:18:51.497 "traddr": "10.0.0.2", 00:18:51.497 "trsvcid": "4420" 00:18:51.497 }, 00:18:51.497 "peer_address": { 00:18:51.497 "trtype": "TCP", 00:18:51.497 "adrfam": "IPv4", 00:18:51.497 "traddr": "10.0.0.1", 00:18:51.497 "trsvcid": "49916" 00:18:51.497 }, 00:18:51.497 "auth": { 00:18:51.497 "state": "completed", 00:18:51.497 "digest": "sha512", 00:18:51.497 "dhgroup": "ffdhe8192" 00:18:51.497 } 00:18:51.498 } 00:18:51.498 ]' 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.498 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.759 17:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MmEzMWJiODcxY2JjMjg5NDJhMjk5YWNmOGQwODkxMzg0NTkzMDNhZDU4OWFiNjgyjLrnvg==: 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.701 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.272 00:18:53.272 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.272 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.272 17:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.533 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.533 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.533 17:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.533 17:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 17:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.533 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:53.533 { 00:18:53.533 "cntlid": 143, 00:18:53.533 "qid": 0, 00:18:53.533 "state": "enabled", 00:18:53.533 "listen_address": { 00:18:53.533 "trtype": "TCP", 00:18:53.533 "adrfam": "IPv4", 00:18:53.533 "traddr": "10.0.0.2", 00:18:53.533 "trsvcid": "4420" 00:18:53.533 }, 00:18:53.533 "peer_address": { 00:18:53.533 "trtype": "TCP", 00:18:53.533 "adrfam": "IPv4", 00:18:53.534 "traddr": "10.0.0.1", 00:18:53.534 "trsvcid": "54262" 00:18:53.534 }, 00:18:53.534 "auth": { 00:18:53.534 "state": "completed", 00:18:53.534 "digest": "sha512", 00:18:53.534 "dhgroup": "ffdhe8192" 00:18:53.534 } 00:18:53.534 } 00:18:53.534 ]' 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.534 17:03:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.534 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.795 17:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:M2E5MmZiZGRjN2JlMWQ4ZDU3OWQ5NmNkYWIwMTNhZmYyNWFlYTE2YmY2NWRiN2JjYjY3NDI2OGQwMjNjOWU3YvOK5EQ=: 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.367 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.628 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:54.628 
17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.201 00:18:55.201 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.201 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.201 17:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.462 { 00:18:55.462 "cntlid": 145, 00:18:55.462 "qid": 0, 00:18:55.462 "state": "enabled", 00:18:55.462 "listen_address": { 00:18:55.462 "trtype": "TCP", 00:18:55.462 "adrfam": "IPv4", 00:18:55.462 "traddr": "10.0.0.2", 00:18:55.462 "trsvcid": "4420" 00:18:55.462 }, 00:18:55.462 "peer_address": { 00:18:55.462 "trtype": "TCP", 00:18:55.462 "adrfam": "IPv4", 00:18:55.462 "traddr": "10.0.0.1", 00:18:55.462 "trsvcid": "54286" 00:18:55.462 }, 00:18:55.462 "auth": { 00:18:55.462 "state": "completed", 00:18:55.462 "digest": "sha512", 00:18:55.462 "dhgroup": "ffdhe8192" 00:18:55.462 } 00:18:55.462 } 00:18:55.462 ]' 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.462 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.724 17:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YjdmOGIyNTY5ZGI1NWEyM2ZmMzAxNmFkNmIwNTdhNDk1ZTdkZDg1ODNmMTgxM2Q1EHZ2/A==: 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.295 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:56.556 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:56.816 request: 00:18:56.816 { 00:18:56.816 "name": "nvme0", 00:18:56.816 "trtype": "tcp", 00:18:56.816 "traddr": "10.0.0.2", 00:18:56.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.816 "adrfam": "ipv4", 00:18:56.816 "trsvcid": "4420", 00:18:56.816 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:56.816 "dhchap_key": "key2", 00:18:56.816 "method": "bdev_nvme_attach_controller", 00:18:56.816 "req_id": 1 00:18:56.816 } 00:18:56.816 Got JSON-RPC error response 00:18:56.816 response: 00:18:56.816 { 00:18:56.816 "code": -32602, 00:18:56.816 "message": "Invalid parameters" 00:18:56.816 } 00:18:56.816 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:56.816 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:56.817 17:03:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1454204 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1454204 ']' 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1454204 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:56.817 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1454204 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1454204' 00:18:57.116 killing process with pid 1454204 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1454204 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1454204 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.116 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.116 rmmod nvme_tcp 00:18:57.116 rmmod nvme_fabrics 00:18:57.116 rmmod nvme_keyring 00:18:57.406 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.406 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:57.406 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1453891 ']' 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1453891 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1453891 ']' 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1453891 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:18:57.407 
17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1453891 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1453891' 00:18:57.407 killing process with pid 1453891 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1453891 00:18:57.407 17:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1453891 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.407 17:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.950 17:03:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:59.950 17:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ut6 /tmp/spdk.key-sha256.pPD /tmp/spdk.key-sha384.KB2 /tmp/spdk.key-sha512.pCE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:59.950 00:18:59.950 real 2m16.709s 00:18:59.950 user 5m3.499s 00:18:59.950 sys 0m20.135s 00:18:59.950 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:59.950 17:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.950 ************************************ 00:18:59.950 END TEST nvmf_auth_target 00:18:59.950 ************************************ 00:18:59.950 17:03:38 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:59.950 17:03:38 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:59.950 17:03:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:18:59.950 17:03:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:59.950 17:03:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.950 ************************************ 00:18:59.950 START TEST nvmf_bdevio_no_huge 00:18:59.950 ************************************ 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:59.950 * Looking for test storage... 
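The bdevio run that starts here drives the same NVMe/TCP target without hugepages (nvmf/common.sh appends NO_HUGE to NVMF_APP further down) and first rebuilds the physical test bed in nvmftestinit. A minimal sketch of the topology the following trace sets up, assuming the two ice/E810 ports it discovers (cvl_0_0, cvl_0_1) and the 10.0.0.x addressing used throughout this log; it is a condensed reading of nvmf_tcp_init, not a replacement for nvmf/common.sh.

  # the target-facing port is isolated in its own network namespace,
  # the initiator-facing port stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 = kernel initiator side, 10.0.0.2 = SPDK target side (inside the namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

An iptables INPUT rule for TCP port 4420 on the initiator-facing interface follows in the trace.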
00:18:59.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.950 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.951 17:03:38 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.951 17:03:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:06.548 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:06.548 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:06.548 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.548 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.548 17:03:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:06.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.549 17:03:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:19:06.549 00:19:06.549 --- 10.0.0.2 ping statistics --- 00:19:06.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.549 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:19:06.549 00:19:06.549 --- 10.0.0.1 ping statistics --- 00:19:06.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.549 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1484047 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1484047 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1484047 ']' 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
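Condensed into one place, the network plumbing that the trace above performs before starting nvmf_tgt is roughly the following (a sketch only; the interface names cvl_0_0/cvl_0_1 and the namespace name are the ones detected and created in this run):

# move the target-side E810 port into its own namespace; the initiator port stays in the default namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability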
00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.549 17:03:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:06.549 [2024-05-15 17:03:45.227222] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:19:06.549 [2024-05-15 17:03:45.227278] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:06.549 [2024-05-15 17:03:45.312595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.810 [2024-05-15 17:03:45.413890] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.810 [2024-05-15 17:03:45.413945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.810 [2024-05-15 17:03:45.413953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.810 [2024-05-15 17:03:45.413960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.810 [2024-05-15 17:03:45.413966] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.810 [2024-05-15 17:03:45.414136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:06.810 [2024-05-15 17:03:45.414294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:06.810 [2024-05-15 17:03:45.414456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.810 [2024-05-15 17:03:45.414456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 [2024-05-15 17:03:46.056627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 Malloc0 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.383 [2024-05-15 17:03:46.109928] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:07.383 [2024-05-15 17:03:46.110306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:07.383 { 00:19:07.383 "params": { 00:19:07.383 "name": "Nvme$subsystem", 00:19:07.383 "trtype": "$TEST_TRANSPORT", 00:19:07.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:07.383 "adrfam": "ipv4", 00:19:07.383 "trsvcid": "$NVMF_PORT", 00:19:07.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:07.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:07.383 "hdgst": ${hdgst:-false}, 00:19:07.383 "ddgst": ${ddgst:-false} 00:19:07.383 }, 00:19:07.383 "method": "bdev_nvme_attach_controller" 00:19:07.383 } 00:19:07.383 EOF 00:19:07.383 )") 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:07.383 17:03:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:07.383 "params": { 00:19:07.383 "name": "Nvme1", 00:19:07.383 "trtype": "tcp", 00:19:07.383 "traddr": "10.0.0.2", 00:19:07.383 "adrfam": "ipv4", 00:19:07.383 "trsvcid": "4420", 00:19:07.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.384 "hdgst": false, 00:19:07.384 "ddgst": false 00:19:07.384 }, 00:19:07.384 "method": "bdev_nvme_attach_controller" 00:19:07.384 }' 00:19:07.384 [2024-05-15 17:03:46.171310] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:19:07.384 [2024-05-15 17:03:46.171423] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1484344 ] 00:19:07.644 [2024-05-15 17:03:46.243393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.644 [2024-05-15 17:03:46.339663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.644 [2024-05-15 17:03:46.339929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.644 [2024-05-15 17:03:46.339932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.904 I/O targets: 00:19:07.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:07.904 00:19:07.904 00:19:07.904 CUnit - A unit testing framework for C - Version 2.1-3 00:19:07.904 http://cunit.sourceforge.net/ 00:19:07.904 00:19:07.904 00:19:07.904 Suite: bdevio tests on: Nvme1n1 00:19:07.904 Test: blockdev write read block ...passed 00:19:07.904 Test: blockdev write zeroes read block ...passed 00:19:07.904 Test: blockdev write zeroes read no split ...passed 00:19:07.904 Test: blockdev write zeroes read split ...passed 00:19:07.904 Test: blockdev write zeroes read split partial ...passed 00:19:07.904 Test: blockdev reset ...[2024-05-15 17:03:46.660903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.904 [2024-05-15 17:03:46.660957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b26660 (9): Bad file descriptor 00:19:07.904 [2024-05-15 17:03:46.679343] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:07.904 passed 00:19:07.904 Test: blockdev write read 8 blocks ...passed 00:19:07.904 Test: blockdev write read size > 128k ...passed 00:19:07.904 Test: blockdev write read invalid size ...passed 00:19:07.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:07.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:07.904 Test: blockdev write read max offset ...passed 00:19:08.164 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:08.164 Test: blockdev writev readv 8 blocks ...passed 00:19:08.164 Test: blockdev writev readv 30 x 1block ...passed 00:19:08.164 Test: blockdev writev readv block ...passed 00:19:08.164 Test: blockdev writev readv size > 128k ...passed 00:19:08.164 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:08.164 Test: blockdev comparev and writev ...[2024-05-15 17:03:46.903392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.903417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.903428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.903434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.903955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.903964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.903973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.903979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.904462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.904470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.904479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.904485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.904966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.904975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.904984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:08.164 [2024-05-15 17:03:46.904989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:08.164 passed 00:19:08.164 Test: blockdev nvme passthru rw ...passed 00:19:08.164 Test: blockdev nvme passthru vendor specific ...[2024-05-15 17:03:46.989458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.164 [2024-05-15 17:03:46.989468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.989818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.164 [2024-05-15 17:03:46.989826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.990185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.164 [2024-05-15 17:03:46.990193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:08.164 [2024-05-15 17:03:46.990586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:08.164 [2024-05-15 17:03:46.990594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:08.164 passed 00:19:08.425 Test: blockdev nvme admin passthru ...passed 00:19:08.425 Test: blockdev copy ...passed 00:19:08.425 00:19:08.425 Run Summary: Type Total Ran Passed Failed Inactive 00:19:08.425 suites 1 1 n/a 0 0 00:19:08.425 tests 23 23 23 0 0 00:19:08.425 asserts 152 152 152 0 n/a 00:19:08.425 00:19:08.425 Elapsed time = 1.142 seconds 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.686 rmmod nvme_tcp 00:19:08.686 rmmod nvme_fabrics 00:19:08.686 rmmod nvme_keyring 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1484047 ']' 00:19:08.686 17:03:47 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1484047 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1484047 ']' 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1484047 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1484047 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1484047' 00:19:08.686 killing process with pid 1484047 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1484047 00:19:08.686 [2024-05-15 17:03:47.455784] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:08.686 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1484047 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.947 17:03:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.494 17:03:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:11.494 00:19:11.494 real 0m11.504s 00:19:11.494 user 0m12.789s 00:19:11.494 sys 0m5.935s 00:19:11.494 17:03:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:11.494 17:03:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.494 ************************************ 00:19:11.494 END TEST nvmf_bdevio_no_huge 00:19:11.494 ************************************ 00:19:11.494 17:03:49 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:11.494 17:03:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:11.494 17:03:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:11.494 17:03:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:11.494 ************************************ 00:19:11.494 START TEST nvmf_tls 00:19:11.494 ************************************ 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
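The bdevio run that just finished stood its target up with a handful of RPCs, traced above as rpc_cmd calls; condensed, and assuming rpc_cmd is equivalent to invoking scripts/rpc.py against the default /var/tmp/spdk.sock socket, the sequence is:

# sketch of the target bring-up used by bdevio.sh above
# (tls.sh below repeats a similar sequence, adding -k to the listener and a --psk host entry)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420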
00:19:11.494 * Looking for test storage... 00:19:11.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.494 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:11.495 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:11.495 17:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.495 17:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:18.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.082 
17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:18.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:18.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:18.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.082 
17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.082 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:18.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:19:18.083 00:19:18.083 --- 10.0.0.2 ping statistics --- 00:19:18.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.083 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:18.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:19:18.083 00:19:18.083 --- 10.0.0.1 ping statistics --- 00:19:18.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.083 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.083 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1488692 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1488692 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1488692 ']' 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:18.343 17:03:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.343 [2024-05-15 17:03:56.976896] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:19:18.343 [2024-05-15 17:03:56.976944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.343 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.344 [2024-05-15 17:03:57.061737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.344 [2024-05-15 17:03:57.144509] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.344 [2024-05-15 17:03:57.144576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.344 [2024-05-15 17:03:57.144584] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.344 [2024-05-15 17:03:57.144591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.344 [2024-05-15 17:03:57.144597] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.344 [2024-05-15 17:03:57.144624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:19.286 true 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:19.286 17:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:19.548 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:19.548 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:19.548 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:19.548 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:19.548 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:19.810 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:19.810 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:19.810 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:20.071 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:20.071 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:20.071 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:20.071 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:20.071 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:20.071 17:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:20.332 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:20.332 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:20.332 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:20.593 17:03:59 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:20.593 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:20.593 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:20.593 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:20.593 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:20.854 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.P75evqZIjs 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.2lTJQuUIJX 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.P75evqZIjs 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2lTJQuUIJX 00:19:21.115 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:21.376 17:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:21.637 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.P75evqZIjs 00:19:21.637 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.P75evqZIjs 00:19:21.637 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:21.637 [2024-05-15 17:04:00.392721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.637 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.897 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.897 [2024-05-15 17:04:00.701392] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:21.897 [2024-05-15 17:04:00.701436] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.897 [2024-05-15 17:04:00.701600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.897 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:22.157 malloc0 00:19:22.157 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:22.418 17:04:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P75evqZIjs 00:19:22.418 [2024-05-15 17:04:01.116371] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:22.418 17:04:01 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.P75evqZIjs 00:19:22.418 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.409 Initializing NVMe Controllers 00:19:32.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:32.409 Initialization complete. Launching workers. 
00:19:32.409 ======================================================== 00:19:32.409 Latency(us) 00:19:32.409 Device Information : IOPS MiB/s Average min max 00:19:32.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19061.99 74.46 3357.42 1146.64 4106.15 00:19:32.409 ======================================================== 00:19:32.409 Total : 19061.99 74.46 3357.42 1146.64 4106.15 00:19:32.409 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P75evqZIjs 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.P75evqZIjs' 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1491390 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.409 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1491390 /var/tmp/bdevperf.sock 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1491390 ']' 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:32.410 17:04:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.670 [2024-05-15 17:04:11.279765] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:32.670 [2024-05-15 17:04:11.279824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491390 ] 00:19:32.670 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.670 [2024-05-15 17:04:11.329515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.670 [2024-05-15 17:04:11.381774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.239 17:04:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:33.239 17:04:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:33.239 17:04:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P75evqZIjs 00:19:33.500 [2024-05-15 17:04:12.166964] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.500 [2024-05-15 17:04:12.167019] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:33.500 TLSTESTn1 00:19:33.500 17:04:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:33.760 Running I/O for 10 seconds... 00:19:43.807 00:19:43.807 Latency(us) 00:19:43.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.808 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:43.808 Verification LBA range: start 0x0 length 0x2000 00:19:43.808 TLSTESTn1 : 10.02 5200.62 20.31 0.00 0.00 24577.74 4724.05 71652.69 00:19:43.808 =================================================================================================================== 00:19:43.808 Total : 5200.62 20.31 0.00 0.00 24577.74 4724.05 71652.69 00:19:43.808 0 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1491390 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1491390 ']' 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1491390 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1491390 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1491390' 00:19:43.808 killing process with pid 1491390 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1491390 00:19:43.808 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.808 00:19:43.808 Latency(us) 00:19:43.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:43.808 =================================================================================================================== 00:19:43.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.808 [2024-05-15 17:04:22.465096] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1491390 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2lTJQuUIJX 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2lTJQuUIJX 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2lTJQuUIJX 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2lTJQuUIJX' 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1493575 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1493575 /var/tmp/bdevperf.sock 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1493575 ']' 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:43.808 17:04:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.808 [2024-05-15 17:04:22.629198] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:43.808 [2024-05-15 17:04:22.629256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493575 ] 00:19:44.069 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.069 [2024-05-15 17:04:22.678008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.069 [2024-05-15 17:04:22.729828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.639 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:44.639 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:44.639 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2lTJQuUIJX 00:19:44.899 [2024-05-15 17:04:23.551061] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.899 [2024-05-15 17:04:23.551113] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:44.899 [2024-05-15 17:04:23.557327] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:44.899 [2024-05-15 17:04:23.558096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2001020 (107): Transport endpoint is not connected 00:19:44.899 [2024-05-15 17:04:23.559091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2001020 (9): Bad file descriptor 00:19:44.899 [2024-05-15 17:04:23.560093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.899 [2024-05-15 17:04:23.560101] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:44.899 [2024-05-15 17:04:23.560108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:44.899 request: 00:19:44.899 { 00:19:44.899 "name": "TLSTEST", 00:19:44.899 "trtype": "tcp", 00:19:44.899 "traddr": "10.0.0.2", 00:19:44.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.899 "adrfam": "ipv4", 00:19:44.899 "trsvcid": "4420", 00:19:44.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.899 "psk": "/tmp/tmp.2lTJQuUIJX", 00:19:44.899 "method": "bdev_nvme_attach_controller", 00:19:44.899 "req_id": 1 00:19:44.899 } 00:19:44.899 Got JSON-RPC error response 00:19:44.899 response: 00:19:44.899 { 00:19:44.899 "code": -32602, 00:19:44.899 "message": "Invalid parameters" 00:19:44.899 } 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1493575 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1493575 ']' 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1493575 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1493575 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1493575' 00:19:44.899 killing process with pid 1493575 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1493575 00:19:44.899 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.899 00:19:44.899 Latency(us) 00:19:44.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.899 =================================================================================================================== 00:19:44.899 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:44.899 [2024-05-15 17:04:23.645497] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:44.899 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1493575 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.P75evqZIjs 00:19:45.160 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.P75evqZIjs 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.P75evqZIjs 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.P75evqZIjs' 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1493714 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1493714 /var/tmp/bdevperf.sock 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1493714 ']' 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:45.161 17:04:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.161 [2024-05-15 17:04:23.808827] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:45.161 [2024-05-15 17:04:23.808878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493714 ] 00:19:45.161 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.161 [2024-05-15 17:04:23.859212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.161 [2024-05-15 17:04:23.909591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.102 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.102 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:46.102 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.P75evqZIjs 00:19:46.102 [2024-05-15 17:04:24.718728] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.102 [2024-05-15 17:04:24.718792] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:46.102 [2024-05-15 17:04:24.730090] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:46.102 [2024-05-15 17:04:24.730110] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:46.102 [2024-05-15 17:04:24.730130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:46.102 [2024-05-15 17:04:24.730832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60c020 (107): Transport endpoint is not connected 00:19:46.102 [2024-05-15 17:04:24.731827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60c020 (9): Bad file descriptor 00:19:46.102 [2024-05-15 17:04:24.732829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:46.102 [2024-05-15 17:04:24.732837] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:46.102 [2024-05-15 17:04:24.732843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:46.102 request: 00:19:46.102 { 00:19:46.102 "name": "TLSTEST", 00:19:46.102 "trtype": "tcp", 00:19:46.102 "traddr": "10.0.0.2", 00:19:46.102 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:46.102 "adrfam": "ipv4", 00:19:46.102 "trsvcid": "4420", 00:19:46.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.102 "psk": "/tmp/tmp.P75evqZIjs", 00:19:46.102 "method": "bdev_nvme_attach_controller", 00:19:46.102 "req_id": 1 00:19:46.103 } 00:19:46.103 Got JSON-RPC error response 00:19:46.103 response: 00:19:46.103 { 00:19:46.103 "code": -32602, 00:19:46.103 "message": "Invalid parameters" 00:19:46.103 } 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1493714 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1493714 ']' 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1493714 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1493714 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1493714' 00:19:46.103 killing process with pid 1493714 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1493714 00:19:46.103 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.103 00:19:46.103 Latency(us) 00:19:46.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.103 =================================================================================================================== 00:19:46.103 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.103 [2024-05-15 17:04:24.817157] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1493714 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.P75evqZIjs 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.P75evqZIjs 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.P75evqZIjs 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.P75evqZIjs' 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494050 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494050 /var/tmp/bdevperf.sock 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494050 ']' 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.103 17:04:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.364 [2024-05-15 17:04:24.969975] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:46.364 [2024-05-15 17:04:24.970027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494050 ] 00:19:46.364 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.364 [2024-05-15 17:04:25.021305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.364 [2024-05-15 17:04:25.071540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.935 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.935 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:46.935 17:04:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.P75evqZIjs 00:19:47.197 [2024-05-15 17:04:25.892725] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.197 [2024-05-15 17:04:25.892783] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:47.197 [2024-05-15 17:04:25.902945] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:47.197 [2024-05-15 17:04:25.902965] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:47.197 [2024-05-15 17:04:25.902985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:47.197 [2024-05-15 17:04:25.903697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4e020 (107): Transport endpoint is not connected 00:19:47.197 [2024-05-15 17:04:25.904692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4e020 (9): Bad file descriptor 00:19:47.197 [2024-05-15 17:04:25.905693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:47.197 [2024-05-15 17:04:25.905702] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:47.197 [2024-05-15 17:04:25.905709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:47.197 request: 00:19:47.197 { 00:19:47.197 "name": "TLSTEST", 00:19:47.197 "trtype": "tcp", 00:19:47.197 "traddr": "10.0.0.2", 00:19:47.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.197 "adrfam": "ipv4", 00:19:47.197 "trsvcid": "4420", 00:19:47.197 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.197 "psk": "/tmp/tmp.P75evqZIjs", 00:19:47.197 "method": "bdev_nvme_attach_controller", 00:19:47.197 "req_id": 1 00:19:47.197 } 00:19:47.197 Got JSON-RPC error response 00:19:47.197 response: 00:19:47.197 { 00:19:47.197 "code": -32602, 00:19:47.197 "message": "Invalid parameters" 00:19:47.197 } 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494050 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494050 ']' 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494050 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494050 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494050' 00:19:47.197 killing process with pid 1494050 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494050 00:19:47.197 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.197 00:19:47.197 Latency(us) 00:19:47.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.197 =================================================================================================================== 00:19:47.197 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.197 [2024-05-15 17:04:25.976650] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:47.197 17:04:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494050 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494303 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494303 /var/tmp/bdevperf.sock 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494303 ']' 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:47.458 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.458 [2024-05-15 17:04:26.141181] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:47.458 [2024-05-15 17:04:26.141279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494303 ] 00:19:47.458 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.458 [2024-05-15 17:04:26.193911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.458 [2024-05-15 17:04:26.246253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.399 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:48.399 17:04:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:48.399 17:04:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:48.399 [2024-05-15 17:04:27.055797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:48.399 [2024-05-15 17:04:27.057817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbffa10 (9): Bad file descriptor 00:19:48.399 [2024-05-15 17:04:27.058816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:48.399 [2024-05-15 17:04:27.058824] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:48.399 [2024-05-15 17:04:27.058831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:48.399 request: 00:19:48.399 { 00:19:48.399 "name": "TLSTEST", 00:19:48.399 "trtype": "tcp", 00:19:48.399 "traddr": "10.0.0.2", 00:19:48.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.399 "adrfam": "ipv4", 00:19:48.399 "trsvcid": "4420", 00:19:48.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.399 "method": "bdev_nvme_attach_controller", 00:19:48.399 "req_id": 1 00:19:48.399 } 00:19:48.399 Got JSON-RPC error response 00:19:48.399 response: 00:19:48.399 { 00:19:48.399 "code": -32602, 00:19:48.399 "message": "Invalid parameters" 00:19:48.399 } 00:19:48.399 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494303 00:19:48.399 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494303 ']' 00:19:48.399 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494303 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494303 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494303' 00:19:48.400 killing process with pid 1494303 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494303 00:19:48.400 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.400 00:19:48.400 Latency(us) 00:19:48.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.400 =================================================================================================================== 00:19:48.400 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.400 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494303 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1488692 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1488692 ']' 00:19:48.660 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1488692 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1488692 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1488692' 00:19:48.661 killing process with pid 1488692 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1488692 
00:19:48.661 [2024-05-15 17:04:27.301510] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:48.661 [2024-05-15 17:04:27.301533] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1488692 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.eTUxjz2W60 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.eTUxjz2W60 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1494502 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1494502 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494502 ']' 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:48.661 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.921 [2024-05-15 17:04:27.534649] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:48.922 [2024-05-15 17:04:27.534705] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.922 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.922 [2024-05-15 17:04:27.618403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.922 [2024-05-15 17:04:27.674382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.922 [2024-05-15 17:04:27.674415] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.922 [2024-05-15 17:04:27.674420] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.922 [2024-05-15 17:04:27.674425] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.922 [2024-05-15 17:04:27.674429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.922 [2024-05-15 17:04:27.674443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.eTUxjz2W60 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eTUxjz2W60 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.182 [2024-05-15 17:04:27.939320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.182 17:04:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.443 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.443 [2024-05-15 17:04:28.252077] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:49.443 [2024-05-15 17:04:28.252114] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.443 [2024-05-15 17:04:28.252270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.443 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.704 malloc0 00:19:49.704 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
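For reference, the TLS setup this run drives through rpc.py can be collapsed into the short script below. It is a minimal sketch assembled only from commands that appear in this log (SPDK v24.05-pre): the NQNs, addresses, PSK string and file permissions are the ones printed above; the $RPC and $KEY_PATH variables and the packaging into one script are added here for illustration; and the -k listener flag and --psk options carry the deprecation warnings visible in the log, so they may differ in later SPDK releases.
#!/usr/bin/env bash
# Sketch of the target + initiator TLS flow exercised by target/tls.sh above.
# Assumes nvmf_tgt is already running (nvmfappstart) and that bdevperf was
# started with "-z -r /var/tmp/bdevperf.sock", as in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# PSK in NVMe TLS interchange format, exactly as format_interchange_psk printed
# it above (prefix, hash id, base64 of the secret with a CRC-32 appended).
KEY_PATH=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"   # the chmod 0666 case later in the log fails with "Incorrect permissions for PSK file"
# Socket layer: select the ssl implementation and pin TLS 1.3, then finish app init.
$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init
# Target side: TCP transport, subsystem, TLS listener (-k), namespace, allowed host + PSK.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
# Initiator side: attach a TLS-protected controller inside bdevperf and run the workload.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests
The negative cases in this log reuse the same attach call: a PSK the target never registered for that host, a host or subsystem NQN that was not configured, omitting --psk entirely, or a key file left at 0666 all end in the bdev_nvme_attach_controller JSON-RPC errors shown around them.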
00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:19:49.964 [2024-05-15 17:04:28.735272] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTUxjz2W60 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eTUxjz2W60' 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494769 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494769 /var/tmp/bdevperf.sock 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494769 ']' 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:49.964 17:04:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.225 [2024-05-15 17:04:28.799045] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:19:50.225 [2024-05-15 17:04:28.799094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494769 ] 00:19:50.225 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.225 [2024-05-15 17:04:28.848583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.225 [2024-05-15 17:04:28.900526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.796 17:04:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:50.796 17:04:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:50.796 17:04:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:19:51.057 [2024-05-15 17:04:29.685646] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.057 [2024-05-15 17:04:29.685697] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:51.057 TLSTESTn1 00:19:51.057 17:04:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.057 Running I/O for 10 seconds... 00:20:03.286 00:20:03.286 Latency(us) 00:20:03.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.286 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.286 Verification LBA range: start 0x0 length 0x2000 00:20:03.286 TLSTESTn1 : 10.01 5586.47 21.82 0.00 0.00 22881.33 5051.73 68157.44 00:20:03.286 =================================================================================================================== 00:20:03.286 Total : 5586.47 21.82 0.00 0.00 22881.33 5051.73 68157.44 00:20:03.287 0 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1494769 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494769 ']' 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494769 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494769 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494769' 00:20:03.287 killing process with pid 1494769 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494769 00:20:03.287 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.287 00:20:03.287 Latency(us) 00:20:03.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:03.287 =================================================================================================================== 00:20:03.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.287 [2024-05-15 17:04:39.970240] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:03.287 17:04:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494769 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.eTUxjz2W60 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTUxjz2W60 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTUxjz2W60 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eTUxjz2W60 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eTUxjz2W60' 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1497049 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1497049 /var/tmp/bdevperf.sock 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1497049 ']' 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.287 [2024-05-15 17:04:40.147423] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:20:03.287 [2024-05-15 17:04:40.147494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497049 ] 00:20:03.287 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.287 [2024-05-15 17:04:40.198804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.287 [2024-05-15 17:04:40.251457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:03.287 17:04:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:20:03.287 [2024-05-15 17:04:41.076649] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.287 [2024-05-15 17:04:41.076686] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:03.287 [2024-05-15 17:04:41.076691] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.eTUxjz2W60 00:20:03.287 request: 00:20:03.287 { 00:20:03.287 "name": "TLSTEST", 00:20:03.287 "trtype": "tcp", 00:20:03.287 "traddr": "10.0.0.2", 00:20:03.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.287 "adrfam": "ipv4", 00:20:03.287 "trsvcid": "4420", 00:20:03.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.287 "psk": "/tmp/tmp.eTUxjz2W60", 00:20:03.287 "method": "bdev_nvme_attach_controller", 00:20:03.287 "req_id": 1 00:20:03.287 } 00:20:03.287 Got JSON-RPC error response 00:20:03.287 response: 00:20:03.287 { 00:20:03.287 "code": -1, 00:20:03.287 "message": "Operation not permitted" 00:20:03.287 } 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1497049 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1497049 ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1497049 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1497049 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1497049' 00:20:03.287 killing process with pid 1497049 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1497049 00:20:03.287 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.287 00:20:03.287 Latency(us) 00:20:03.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.287 =================================================================================================================== 00:20:03.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 1497049 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1494502 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494502 ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494502 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494502 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494502' 00:20:03.287 killing process with pid 1494502 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494502 00:20:03.287 [2024-05-15 17:04:41.325504] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:03.287 [2024-05-15 17:04:41.325539] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494502 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1497203 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1497203 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1497203 ']' 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
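The failed attach traced above is the point of target/tls.sh test case 171: once the PSK file is made world-readable, bdev_nvme refuses to load it and the RPC comes back with "Operation not permitted". A minimal sketch of that negative check, built only from the chmod and rpc.py invocations already visible in this log (socket and key paths are the ones this particular run generated):

    # Negative test: a PSK file with a permissive mode must be rejected.
    chmod 0666 /tmp/tmp.eTUxjz2W60

    # Expected to fail with "Operation not permitted"
    # ("Incorrect permissions for PSK file" in the bdevperf log above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.eTUxjz2W60 && exit 1   # success here would mean the test failed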
00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.287 17:04:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.287 [2024-05-15 17:04:41.497571] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:03.287 [2024-05-15 17:04:41.497621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.288 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.288 [2024-05-15 17:04:41.578990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.288 [2024-05-15 17:04:41.631677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.288 [2024-05-15 17:04:41.631709] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.288 [2024-05-15 17:04:41.631714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.288 [2024-05-15 17:04:41.631722] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.288 [2024-05-15 17:04:41.631726] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.288 [2024-05-15 17:04:41.631740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.eTUxjz2W60 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.eTUxjz2W60 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.eTUxjz2W60 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eTUxjz2W60 00:20:03.548 17:04:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.808 [2024-05-15 17:04:42.441709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.808 17:04:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.808 17:04:42 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.068 [2024-05-15 17:04:42.750452] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:04.068 [2024-05-15 17:04:42.750489] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.068 [2024-05-15 17:04:42.750659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.068 17:04:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.330 malloc0 00:20:04.330 17:04:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.330 17:04:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:20:04.590 [2024-05-15 17:04:43.213598] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:04.590 [2024-05-15 17:04:43.213617] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:04.590 [2024-05-15 17:04:43.213636] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:04.590 request: 00:20:04.590 { 00:20:04.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.590 "host": "nqn.2016-06.io.spdk:host1", 00:20:04.590 "psk": "/tmp/tmp.eTUxjz2W60", 00:20:04.590 "method": "nvmf_subsystem_add_host", 00:20:04.590 "req_id": 1 00:20:04.590 } 00:20:04.590 Got JSON-RPC error response 00:20:04.590 response: 00:20:04.590 { 00:20:04.590 "code": -32603, 00:20:04.590 "message": "Internal error" 00:20:04.590 } 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1497203 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1497203 ']' 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1497203 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1497203 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1497203' 00:20:04.590 killing process with pid 1497203 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1497203 00:20:04.590 [2024-05-15 17:04:43.281281] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1497203 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.eTUxjz2W60 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1497680 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1497680 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1497680 ']' 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:04.590 17:04:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.849 [2024-05-15 17:04:43.431280] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:04.849 [2024-05-15 17:04:43.431324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.849 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.849 [2024-05-15 17:04:43.502685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.849 [2024-05-15 17:04:43.555459] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.849 [2024-05-15 17:04:43.555490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.849 [2024-05-15 17:04:43.555496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.849 [2024-05-15 17:04:43.555500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.849 [2024-05-15 17:04:43.555504] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
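With the key still mode 0666 the target side rejects it as well (the -32603 "Internal error" from nvmf_subsystem_add_host above), so the script tightens the mode to 0600 before repeating the setup against a fresh nvmf_tgt. A sketch of the target-side sequence that follows, assembled from the setup_nvmf_tgt rpc.py calls traced in this log (the $RPC shorthand is only for readability here; the script spells out the full path each time and relies on the default /var/tmp/spdk.sock):

    # Target-side TLS setup, as traced from target/tls.sh lines 49-58.
    chmod 0600 /tmp/tmp.eTUxjz2W60        # PSK file must not be group/world accessible

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.eTUxjz2W60         # succeeds only once the key mode is 0600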
00:20:04.849 [2024-05-15 17:04:43.555519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.416 17:04:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:05.416 17:04:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:05.416 17:04:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.416 17:04:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.416 17:04:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.675 17:04:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.675 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.eTUxjz2W60 00:20:05.675 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eTUxjz2W60 00:20:05.675 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.675 [2024-05-15 17:04:44.393776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.675 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:05.934 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:05.934 [2024-05-15 17:04:44.702518] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:05.934 [2024-05-15 17:04:44.702560] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.934 [2024-05-15 17:04:44.702718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.934 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:06.193 malloc0 00:20:06.193 17:04:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:20:06.453 [2024-05-15 17:04:45.169690] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1498052 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1498052 /var/tmp/bdevperf.sock 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1498052 ']' 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:06.453 17:04:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.453 [2024-05-15 17:04:45.231703] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:06.453 [2024-05-15 17:04:45.231751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498052 ] 00:20:06.453 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.453 [2024-05-15 17:04:45.280611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.712 [2024-05-15 17:04:45.332740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.283 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:07.283 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:07.283 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:20:07.543 [2024-05-15 17:04:46.149977] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.543 [2024-05-15 17:04:46.150031] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:07.543 TLSTESTn1 00:20:07.543 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:07.804 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:07.804 "subsystems": [ 00:20:07.804 { 00:20:07.804 "subsystem": "keyring", 00:20:07.804 "config": [] 00:20:07.804 }, 00:20:07.804 { 00:20:07.804 "subsystem": "iobuf", 00:20:07.804 "config": [ 00:20:07.805 { 00:20:07.805 "method": "iobuf_set_options", 00:20:07.805 "params": { 00:20:07.805 "small_pool_count": 8192, 00:20:07.805 "large_pool_count": 1024, 00:20:07.805 "small_bufsize": 8192, 00:20:07.805 "large_bufsize": 135168 00:20:07.805 } 00:20:07.805 } 00:20:07.805 ] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "sock", 00:20:07.805 "config": [ 00:20:07.805 { 00:20:07.805 "method": "sock_impl_set_options", 00:20:07.805 "params": { 00:20:07.805 "impl_name": "posix", 00:20:07.805 "recv_buf_size": 2097152, 00:20:07.805 "send_buf_size": 2097152, 00:20:07.805 "enable_recv_pipe": true, 00:20:07.805 "enable_quickack": false, 00:20:07.805 "enable_placement_id": 0, 00:20:07.805 "enable_zerocopy_send_server": true, 00:20:07.805 "enable_zerocopy_send_client": false, 00:20:07.805 "zerocopy_threshold": 0, 00:20:07.805 "tls_version": 0, 00:20:07.805 "enable_ktls": false 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "sock_impl_set_options", 00:20:07.805 "params": { 00:20:07.805 
"impl_name": "ssl", 00:20:07.805 "recv_buf_size": 4096, 00:20:07.805 "send_buf_size": 4096, 00:20:07.805 "enable_recv_pipe": true, 00:20:07.805 "enable_quickack": false, 00:20:07.805 "enable_placement_id": 0, 00:20:07.805 "enable_zerocopy_send_server": true, 00:20:07.805 "enable_zerocopy_send_client": false, 00:20:07.805 "zerocopy_threshold": 0, 00:20:07.805 "tls_version": 0, 00:20:07.805 "enable_ktls": false 00:20:07.805 } 00:20:07.805 } 00:20:07.805 ] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "vmd", 00:20:07.805 "config": [] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "accel", 00:20:07.805 "config": [ 00:20:07.805 { 00:20:07.805 "method": "accel_set_options", 00:20:07.805 "params": { 00:20:07.805 "small_cache_size": 128, 00:20:07.805 "large_cache_size": 16, 00:20:07.805 "task_count": 2048, 00:20:07.805 "sequence_count": 2048, 00:20:07.805 "buf_count": 2048 00:20:07.805 } 00:20:07.805 } 00:20:07.805 ] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "bdev", 00:20:07.805 "config": [ 00:20:07.805 { 00:20:07.805 "method": "bdev_set_options", 00:20:07.805 "params": { 00:20:07.805 "bdev_io_pool_size": 65535, 00:20:07.805 "bdev_io_cache_size": 256, 00:20:07.805 "bdev_auto_examine": true, 00:20:07.805 "iobuf_small_cache_size": 128, 00:20:07.805 "iobuf_large_cache_size": 16 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "bdev_raid_set_options", 00:20:07.805 "params": { 00:20:07.805 "process_window_size_kb": 1024 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "bdev_iscsi_set_options", 00:20:07.805 "params": { 00:20:07.805 "timeout_sec": 30 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "bdev_nvme_set_options", 00:20:07.805 "params": { 00:20:07.805 "action_on_timeout": "none", 00:20:07.805 "timeout_us": 0, 00:20:07.805 "timeout_admin_us": 0, 00:20:07.805 "keep_alive_timeout_ms": 10000, 00:20:07.805 "arbitration_burst": 0, 00:20:07.805 "low_priority_weight": 0, 00:20:07.805 "medium_priority_weight": 0, 00:20:07.805 "high_priority_weight": 0, 00:20:07.805 "nvme_adminq_poll_period_us": 10000, 00:20:07.805 "nvme_ioq_poll_period_us": 0, 00:20:07.805 "io_queue_requests": 0, 00:20:07.805 "delay_cmd_submit": true, 00:20:07.805 "transport_retry_count": 4, 00:20:07.805 "bdev_retry_count": 3, 00:20:07.805 "transport_ack_timeout": 0, 00:20:07.805 "ctrlr_loss_timeout_sec": 0, 00:20:07.805 "reconnect_delay_sec": 0, 00:20:07.805 "fast_io_fail_timeout_sec": 0, 00:20:07.805 "disable_auto_failback": false, 00:20:07.805 "generate_uuids": false, 00:20:07.805 "transport_tos": 0, 00:20:07.805 "nvme_error_stat": false, 00:20:07.805 "rdma_srq_size": 0, 00:20:07.805 "io_path_stat": false, 00:20:07.805 "allow_accel_sequence": false, 00:20:07.805 "rdma_max_cq_size": 0, 00:20:07.805 "rdma_cm_event_timeout_ms": 0, 00:20:07.805 "dhchap_digests": [ 00:20:07.805 "sha256", 00:20:07.805 "sha384", 00:20:07.805 "sha512" 00:20:07.805 ], 00:20:07.805 "dhchap_dhgroups": [ 00:20:07.805 "null", 00:20:07.805 "ffdhe2048", 00:20:07.805 "ffdhe3072", 00:20:07.805 "ffdhe4096", 00:20:07.805 "ffdhe6144", 00:20:07.805 "ffdhe8192" 00:20:07.805 ] 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "bdev_nvme_set_hotplug", 00:20:07.805 "params": { 00:20:07.805 "period_us": 100000, 00:20:07.805 "enable": false 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "bdev_malloc_create", 00:20:07.805 "params": { 00:20:07.805 "name": "malloc0", 00:20:07.805 "num_blocks": 8192, 00:20:07.805 "block_size": 4096, 00:20:07.805 
"physical_block_size": 4096, 00:20:07.805 "uuid": "80f1081d-f42a-4077-8f0b-58be239b85bf", 00:20:07.805 "optimal_io_boundary": 0 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "bdev_wait_for_examine" 00:20:07.805 } 00:20:07.805 ] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "nbd", 00:20:07.805 "config": [] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "scheduler", 00:20:07.805 "config": [ 00:20:07.805 { 00:20:07.805 "method": "framework_set_scheduler", 00:20:07.805 "params": { 00:20:07.805 "name": "static" 00:20:07.805 } 00:20:07.805 } 00:20:07.805 ] 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "subsystem": "nvmf", 00:20:07.805 "config": [ 00:20:07.805 { 00:20:07.805 "method": "nvmf_set_config", 00:20:07.805 "params": { 00:20:07.805 "discovery_filter": "match_any", 00:20:07.805 "admin_cmd_passthru": { 00:20:07.805 "identify_ctrlr": false 00:20:07.805 } 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "nvmf_set_max_subsystems", 00:20:07.805 "params": { 00:20:07.805 "max_subsystems": 1024 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "nvmf_set_crdt", 00:20:07.805 "params": { 00:20:07.805 "crdt1": 0, 00:20:07.805 "crdt2": 0, 00:20:07.805 "crdt3": 0 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "nvmf_create_transport", 00:20:07.805 "params": { 00:20:07.805 "trtype": "TCP", 00:20:07.805 "max_queue_depth": 128, 00:20:07.805 "max_io_qpairs_per_ctrlr": 127, 00:20:07.805 "in_capsule_data_size": 4096, 00:20:07.805 "max_io_size": 131072, 00:20:07.805 "io_unit_size": 131072, 00:20:07.805 "max_aq_depth": 128, 00:20:07.805 "num_shared_buffers": 511, 00:20:07.805 "buf_cache_size": 4294967295, 00:20:07.805 "dif_insert_or_strip": false, 00:20:07.805 "zcopy": false, 00:20:07.805 "c2h_success": false, 00:20:07.805 "sock_priority": 0, 00:20:07.805 "abort_timeout_sec": 1, 00:20:07.805 "ack_timeout": 0, 00:20:07.805 "data_wr_pool_size": 0 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "nvmf_create_subsystem", 00:20:07.805 "params": { 00:20:07.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.805 "allow_any_host": false, 00:20:07.805 "serial_number": "SPDK00000000000001", 00:20:07.805 "model_number": "SPDK bdev Controller", 00:20:07.805 "max_namespaces": 10, 00:20:07.805 "min_cntlid": 1, 00:20:07.805 "max_cntlid": 65519, 00:20:07.805 "ana_reporting": false 00:20:07.805 } 00:20:07.805 }, 00:20:07.805 { 00:20:07.805 "method": "nvmf_subsystem_add_host", 00:20:07.805 "params": { 00:20:07.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.805 "host": "nqn.2016-06.io.spdk:host1", 00:20:07.805 "psk": "/tmp/tmp.eTUxjz2W60" 00:20:07.805 } 00:20:07.805 }, 00:20:07.806 { 00:20:07.806 "method": "nvmf_subsystem_add_ns", 00:20:07.806 "params": { 00:20:07.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.806 "namespace": { 00:20:07.806 "nsid": 1, 00:20:07.806 "bdev_name": "malloc0", 00:20:07.806 "nguid": "80F1081DF42A40778F0B58BE239B85BF", 00:20:07.806 "uuid": "80f1081d-f42a-4077-8f0b-58be239b85bf", 00:20:07.806 "no_auto_visible": false 00:20:07.806 } 00:20:07.806 } 00:20:07.806 }, 00:20:07.806 { 00:20:07.806 "method": "nvmf_subsystem_add_listener", 00:20:07.806 "params": { 00:20:07.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.806 "listen_address": { 00:20:07.806 "trtype": "TCP", 00:20:07.806 "adrfam": "IPv4", 00:20:07.806 "traddr": "10.0.0.2", 00:20:07.806 "trsvcid": "4420" 00:20:07.806 }, 00:20:07.806 "secure_channel": true 00:20:07.806 } 00:20:07.806 } 00:20:07.806 ] 00:20:07.806 } 
00:20:07.806 ] 00:20:07.806 }' 00:20:07.806 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:08.067 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:08.067 "subsystems": [ 00:20:08.067 { 00:20:08.067 "subsystem": "keyring", 00:20:08.067 "config": [] 00:20:08.067 }, 00:20:08.067 { 00:20:08.067 "subsystem": "iobuf", 00:20:08.067 "config": [ 00:20:08.067 { 00:20:08.067 "method": "iobuf_set_options", 00:20:08.067 "params": { 00:20:08.067 "small_pool_count": 8192, 00:20:08.067 "large_pool_count": 1024, 00:20:08.067 "small_bufsize": 8192, 00:20:08.067 "large_bufsize": 135168 00:20:08.067 } 00:20:08.067 } 00:20:08.067 ] 00:20:08.067 }, 00:20:08.067 { 00:20:08.067 "subsystem": "sock", 00:20:08.067 "config": [ 00:20:08.067 { 00:20:08.067 "method": "sock_impl_set_options", 00:20:08.067 "params": { 00:20:08.067 "impl_name": "posix", 00:20:08.067 "recv_buf_size": 2097152, 00:20:08.067 "send_buf_size": 2097152, 00:20:08.067 "enable_recv_pipe": true, 00:20:08.067 "enable_quickack": false, 00:20:08.067 "enable_placement_id": 0, 00:20:08.067 "enable_zerocopy_send_server": true, 00:20:08.067 "enable_zerocopy_send_client": false, 00:20:08.067 "zerocopy_threshold": 0, 00:20:08.067 "tls_version": 0, 00:20:08.067 "enable_ktls": false 00:20:08.067 } 00:20:08.067 }, 00:20:08.067 { 00:20:08.067 "method": "sock_impl_set_options", 00:20:08.067 "params": { 00:20:08.067 "impl_name": "ssl", 00:20:08.067 "recv_buf_size": 4096, 00:20:08.067 "send_buf_size": 4096, 00:20:08.067 "enable_recv_pipe": true, 00:20:08.067 "enable_quickack": false, 00:20:08.067 "enable_placement_id": 0, 00:20:08.067 "enable_zerocopy_send_server": true, 00:20:08.067 "enable_zerocopy_send_client": false, 00:20:08.067 "zerocopy_threshold": 0, 00:20:08.067 "tls_version": 0, 00:20:08.067 "enable_ktls": false 00:20:08.067 } 00:20:08.067 } 00:20:08.067 ] 00:20:08.067 }, 00:20:08.067 { 00:20:08.067 "subsystem": "vmd", 00:20:08.067 "config": [] 00:20:08.067 }, 00:20:08.067 { 00:20:08.067 "subsystem": "accel", 00:20:08.067 "config": [ 00:20:08.067 { 00:20:08.067 "method": "accel_set_options", 00:20:08.067 "params": { 00:20:08.067 "small_cache_size": 128, 00:20:08.067 "large_cache_size": 16, 00:20:08.067 "task_count": 2048, 00:20:08.067 "sequence_count": 2048, 00:20:08.067 "buf_count": 2048 00:20:08.067 } 00:20:08.067 } 00:20:08.067 ] 00:20:08.067 }, 00:20:08.067 { 00:20:08.068 "subsystem": "bdev", 00:20:08.068 "config": [ 00:20:08.068 { 00:20:08.068 "method": "bdev_set_options", 00:20:08.068 "params": { 00:20:08.068 "bdev_io_pool_size": 65535, 00:20:08.068 "bdev_io_cache_size": 256, 00:20:08.068 "bdev_auto_examine": true, 00:20:08.068 "iobuf_small_cache_size": 128, 00:20:08.068 "iobuf_large_cache_size": 16 00:20:08.068 } 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "method": "bdev_raid_set_options", 00:20:08.068 "params": { 00:20:08.068 "process_window_size_kb": 1024 00:20:08.068 } 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "method": "bdev_iscsi_set_options", 00:20:08.068 "params": { 00:20:08.068 "timeout_sec": 30 00:20:08.068 } 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "method": "bdev_nvme_set_options", 00:20:08.068 "params": { 00:20:08.068 "action_on_timeout": "none", 00:20:08.068 "timeout_us": 0, 00:20:08.068 "timeout_admin_us": 0, 00:20:08.068 "keep_alive_timeout_ms": 10000, 00:20:08.068 "arbitration_burst": 0, 00:20:08.068 "low_priority_weight": 0, 00:20:08.068 "medium_priority_weight": 0, 00:20:08.068 
"high_priority_weight": 0, 00:20:08.068 "nvme_adminq_poll_period_us": 10000, 00:20:08.068 "nvme_ioq_poll_period_us": 0, 00:20:08.068 "io_queue_requests": 512, 00:20:08.068 "delay_cmd_submit": true, 00:20:08.068 "transport_retry_count": 4, 00:20:08.068 "bdev_retry_count": 3, 00:20:08.068 "transport_ack_timeout": 0, 00:20:08.068 "ctrlr_loss_timeout_sec": 0, 00:20:08.068 "reconnect_delay_sec": 0, 00:20:08.068 "fast_io_fail_timeout_sec": 0, 00:20:08.068 "disable_auto_failback": false, 00:20:08.068 "generate_uuids": false, 00:20:08.068 "transport_tos": 0, 00:20:08.068 "nvme_error_stat": false, 00:20:08.068 "rdma_srq_size": 0, 00:20:08.068 "io_path_stat": false, 00:20:08.068 "allow_accel_sequence": false, 00:20:08.068 "rdma_max_cq_size": 0, 00:20:08.068 "rdma_cm_event_timeout_ms": 0, 00:20:08.068 "dhchap_digests": [ 00:20:08.068 "sha256", 00:20:08.068 "sha384", 00:20:08.068 "sha512" 00:20:08.068 ], 00:20:08.068 "dhchap_dhgroups": [ 00:20:08.068 "null", 00:20:08.068 "ffdhe2048", 00:20:08.068 "ffdhe3072", 00:20:08.068 "ffdhe4096", 00:20:08.068 "ffdhe6144", 00:20:08.068 "ffdhe8192" 00:20:08.068 ] 00:20:08.068 } 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "method": "bdev_nvme_attach_controller", 00:20:08.068 "params": { 00:20:08.068 "name": "TLSTEST", 00:20:08.068 "trtype": "TCP", 00:20:08.068 "adrfam": "IPv4", 00:20:08.068 "traddr": "10.0.0.2", 00:20:08.068 "trsvcid": "4420", 00:20:08.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.068 "prchk_reftag": false, 00:20:08.068 "prchk_guard": false, 00:20:08.068 "ctrlr_loss_timeout_sec": 0, 00:20:08.068 "reconnect_delay_sec": 0, 00:20:08.068 "fast_io_fail_timeout_sec": 0, 00:20:08.068 "psk": "/tmp/tmp.eTUxjz2W60", 00:20:08.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.068 "hdgst": false, 00:20:08.068 "ddgst": false 00:20:08.068 } 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "method": "bdev_nvme_set_hotplug", 00:20:08.068 "params": { 00:20:08.068 "period_us": 100000, 00:20:08.068 "enable": false 00:20:08.068 } 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "method": "bdev_wait_for_examine" 00:20:08.068 } 00:20:08.068 ] 00:20:08.068 }, 00:20:08.068 { 00:20:08.068 "subsystem": "nbd", 00:20:08.068 "config": [] 00:20:08.068 } 00:20:08.068 ] 00:20:08.068 }' 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1498052 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1498052 ']' 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1498052 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1498052 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1498052' 00:20:08.068 killing process with pid 1498052 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1498052 00:20:08.068 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.068 00:20:08.068 Latency(us) 00:20:08.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.068 
=================================================================================================================== 00:20:08.068 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:08.068 [2024-05-15 17:04:46.789034] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1498052 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1497680 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1497680 ']' 00:20:08.068 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1497680 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1497680 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1497680' 00:20:08.330 killing process with pid 1497680 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1497680 00:20:08.330 [2024-05-15 17:04:46.956954] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:08.330 [2024-05-15 17:04:46.956990] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:08.330 17:04:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1497680 00:20:08.330 17:04:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:08.330 17:04:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.330 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:08.330 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.330 17:04:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:08.330 "subsystems": [ 00:20:08.330 { 00:20:08.330 "subsystem": "keyring", 00:20:08.330 "config": [] 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "subsystem": "iobuf", 00:20:08.330 "config": [ 00:20:08.330 { 00:20:08.330 "method": "iobuf_set_options", 00:20:08.330 "params": { 00:20:08.330 "small_pool_count": 8192, 00:20:08.330 "large_pool_count": 1024, 00:20:08.330 "small_bufsize": 8192, 00:20:08.330 "large_bufsize": 135168 00:20:08.330 } 00:20:08.330 } 00:20:08.330 ] 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "subsystem": "sock", 00:20:08.330 "config": [ 00:20:08.330 { 00:20:08.330 "method": "sock_impl_set_options", 00:20:08.330 "params": { 00:20:08.330 "impl_name": "posix", 00:20:08.330 "recv_buf_size": 2097152, 00:20:08.330 "send_buf_size": 2097152, 00:20:08.330 "enable_recv_pipe": true, 00:20:08.330 "enable_quickack": false, 00:20:08.330 "enable_placement_id": 0, 00:20:08.330 "enable_zerocopy_send_server": true, 00:20:08.330 "enable_zerocopy_send_client": false, 00:20:08.330 "zerocopy_threshold": 0, 00:20:08.330 "tls_version": 0, 00:20:08.330 "enable_ktls": false 00:20:08.330 } 
00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "method": "sock_impl_set_options", 00:20:08.330 "params": { 00:20:08.330 "impl_name": "ssl", 00:20:08.330 "recv_buf_size": 4096, 00:20:08.330 "send_buf_size": 4096, 00:20:08.330 "enable_recv_pipe": true, 00:20:08.330 "enable_quickack": false, 00:20:08.330 "enable_placement_id": 0, 00:20:08.330 "enable_zerocopy_send_server": true, 00:20:08.330 "enable_zerocopy_send_client": false, 00:20:08.330 "zerocopy_threshold": 0, 00:20:08.330 "tls_version": 0, 00:20:08.330 "enable_ktls": false 00:20:08.330 } 00:20:08.330 } 00:20:08.330 ] 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "subsystem": "vmd", 00:20:08.330 "config": [] 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "subsystem": "accel", 00:20:08.330 "config": [ 00:20:08.330 { 00:20:08.330 "method": "accel_set_options", 00:20:08.330 "params": { 00:20:08.330 "small_cache_size": 128, 00:20:08.330 "large_cache_size": 16, 00:20:08.330 "task_count": 2048, 00:20:08.330 "sequence_count": 2048, 00:20:08.330 "buf_count": 2048 00:20:08.330 } 00:20:08.330 } 00:20:08.330 ] 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "subsystem": "bdev", 00:20:08.330 "config": [ 00:20:08.330 { 00:20:08.330 "method": "bdev_set_options", 00:20:08.330 "params": { 00:20:08.330 "bdev_io_pool_size": 65535, 00:20:08.330 "bdev_io_cache_size": 256, 00:20:08.330 "bdev_auto_examine": true, 00:20:08.330 "iobuf_small_cache_size": 128, 00:20:08.330 "iobuf_large_cache_size": 16 00:20:08.330 } 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "method": "bdev_raid_set_options", 00:20:08.330 "params": { 00:20:08.330 "process_window_size_kb": 1024 00:20:08.330 } 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "method": "bdev_iscsi_set_options", 00:20:08.330 "params": { 00:20:08.330 "timeout_sec": 30 00:20:08.330 } 00:20:08.330 }, 00:20:08.330 { 00:20:08.330 "method": "bdev_nvme_set_options", 00:20:08.330 "params": { 00:20:08.330 "action_on_timeout": "none", 00:20:08.330 "timeout_us": 0, 00:20:08.330 "timeout_admin_us": 0, 00:20:08.330 "keep_alive_timeout_ms": 10000, 00:20:08.330 "arbitration_burst": 0, 00:20:08.330 "low_priority_weight": 0, 00:20:08.330 "medium_priority_weight": 0, 00:20:08.330 "high_priority_weight": 0, 00:20:08.330 "nvme_adminq_poll_period_us": 10000, 00:20:08.330 "nvme_ioq_poll_period_us": 0, 00:20:08.330 "io_queue_requests": 0, 00:20:08.330 "delay_cmd_submit": true, 00:20:08.330 "transport_retry_count": 4, 00:20:08.330 "bdev_retry_count": 3, 00:20:08.331 "transport_ack_timeout": 0, 00:20:08.331 "ctrlr_loss_timeout_sec": 0, 00:20:08.331 "reconnect_delay_sec": 0, 00:20:08.331 "fast_io_fail_timeout_sec": 0, 00:20:08.331 "disable_auto_failback": false, 00:20:08.331 "generate_uuids": false, 00:20:08.331 "transport_tos": 0, 00:20:08.331 "nvme_error_stat": false, 00:20:08.331 "rdma_srq_size": 0, 00:20:08.331 "io_path_stat": false, 00:20:08.331 "allow_accel_sequence": false, 00:20:08.331 "rdma_max_cq_size": 0, 00:20:08.331 "rdma_cm_event_timeout_ms": 0, 00:20:08.331 "dhchap_digests": [ 00:20:08.331 "sha256", 00:20:08.331 "sha384", 00:20:08.331 "sha512" 00:20:08.331 ], 00:20:08.331 "dhchap_dhgroups": [ 00:20:08.331 "null", 00:20:08.331 "ffdhe2048", 00:20:08.331 "ffdhe3072", 00:20:08.331 "ffdhe4096", 00:20:08.331 "ffdhe6144", 00:20:08.331 "ffdhe8192" 00:20:08.331 ] 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "bdev_nvme_set_hotplug", 00:20:08.331 "params": { 00:20:08.331 "period_us": 100000, 00:20:08.331 "enable": false 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "bdev_malloc_create", 00:20:08.331 
"params": { 00:20:08.331 "name": "malloc0", 00:20:08.331 "num_blocks": 8192, 00:20:08.331 "block_size": 4096, 00:20:08.331 "physical_block_size": 4096, 00:20:08.331 "uuid": "80f1081d-f42a-4077-8f0b-58be239b85bf", 00:20:08.331 "optimal_io_boundary": 0 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "bdev_wait_for_examine" 00:20:08.331 } 00:20:08.331 ] 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "subsystem": "nbd", 00:20:08.331 "config": [] 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "subsystem": "scheduler", 00:20:08.331 "config": [ 00:20:08.331 { 00:20:08.331 "method": "framework_set_scheduler", 00:20:08.331 "params": { 00:20:08.331 "name": "static" 00:20:08.331 } 00:20:08.331 } 00:20:08.331 ] 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "subsystem": "nvmf", 00:20:08.331 "config": [ 00:20:08.331 { 00:20:08.331 "method": "nvmf_set_config", 00:20:08.331 "params": { 00:20:08.331 "discovery_filter": "match_any", 00:20:08.331 "admin_cmd_passthru": { 00:20:08.331 "identify_ctrlr": false 00:20:08.331 } 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_set_max_subsystems", 00:20:08.331 "params": { 00:20:08.331 "max_subsystems": 1024 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_set_crdt", 00:20:08.331 "params": { 00:20:08.331 "crdt1": 0, 00:20:08.331 "crdt2": 0, 00:20:08.331 "crdt3": 0 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_create_transport", 00:20:08.331 "params": { 00:20:08.331 "trtype": "TCP", 00:20:08.331 "max_queue_depth": 128, 00:20:08.331 "max_io_qpairs_per_ctrlr": 127, 00:20:08.331 "in_capsule_data_size": 4096, 00:20:08.331 "max_io_size": 131072, 00:20:08.331 "io_unit_size": 131072, 00:20:08.331 "max_aq_depth": 128, 00:20:08.331 "num_shared_buffers": 511, 00:20:08.331 "buf_cache_size": 4294967295, 00:20:08.331 "dif_insert_or_strip": false, 00:20:08.331 "zcopy": false, 00:20:08.331 "c2h_success": false, 00:20:08.331 "sock_priority": 0, 00:20:08.331 "abort_timeout_sec": 1, 00:20:08.331 "ack_timeout": 0, 00:20:08.331 "data_wr_pool_size": 0 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_create_subsystem", 00:20:08.331 "params": { 00:20:08.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.331 "allow_any_host": false, 00:20:08.331 "serial_number": "SPDK00000000000001", 00:20:08.331 "model_number": "SPDK bdev Controller", 00:20:08.331 "max_namespaces": 10, 00:20:08.331 "min_cntlid": 1, 00:20:08.331 "max_cntlid": 65519, 00:20:08.331 "ana_reporting": false 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_subsystem_add_host", 00:20:08.331 "params": { 00:20:08.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.331 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.331 "psk": "/tmp/tmp.eTUxjz2W60" 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_subsystem_add_ns", 00:20:08.331 "params": { 00:20:08.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.331 "namespace": { 00:20:08.331 "nsid": 1, 00:20:08.331 "bdev_name": "malloc0", 00:20:08.331 "nguid": "80F1081DF42A40778F0B58BE239B85BF", 00:20:08.331 "uuid": "80f1081d-f42a-4077-8f0b-58be239b85bf", 00:20:08.331 "no_auto_visible": false 00:20:08.331 } 00:20:08.331 } 00:20:08.331 }, 00:20:08.331 { 00:20:08.331 "method": "nvmf_subsystem_add_listener", 00:20:08.331 "params": { 00:20:08.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.331 "listen_address": { 00:20:08.331 "trtype": "TCP", 00:20:08.331 "adrfam": "IPv4", 00:20:08.331 "traddr": "10.0.0.2", 00:20:08.331 "trsvcid": 
"4420" 00:20:08.331 }, 00:20:08.331 "secure_channel": true 00:20:08.331 } 00:20:08.331 } 00:20:08.331 ] 00:20:08.331 } 00:20:08.331 ] 00:20:08.331 }' 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1498486 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1498486 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1498486 ']' 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.331 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.331 [2024-05-15 17:04:47.141283] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:08.331 [2024-05-15 17:04:47.141338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.593 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.593 [2024-05-15 17:04:47.223503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.593 [2024-05-15 17:04:47.278080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.593 [2024-05-15 17:04:47.278109] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.593 [2024-05-15 17:04:47.278114] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.593 [2024-05-15 17:04:47.278118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.593 [2024-05-15 17:04:47.278122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:08.593 [2024-05-15 17:04:47.278166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.854 [2024-05-15 17:04:47.453324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.854 [2024-05-15 17:04:47.469295] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:08.854 [2024-05-15 17:04:47.485328] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:08.854 [2024-05-15 17:04:47.485363] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.854 [2024-05-15 17:04:47.497864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1498515 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1498515 /var/tmp/bdevperf.sock 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1498515 ']' 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
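The bdevperf instance launched next takes the initiator configuration, including the bdev_nvme_attach_controller call with its psk, as a JSON config on -c (echoed via /dev/fd/63 just below) instead of issuing the attach over RPC afterwards; the verify workload is then driven through bdevperf.py. A sketch of that pattern with a hypothetical file name standing in for the process substitution, assuming the config is captured while the previous bdevperf instance is still listening:

    # Save the initiator-side config captured earlier against the bdevperf RPC socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock save_config > /tmp/bdevperfconf.json

    # Relaunch bdevperf with the config baked in, then kick off the tests
    # (the script waits for the RPC socket before this point).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c /tmp/bdevperfconf.json &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests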
00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.116 17:04:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:09.116 "subsystems": [ 00:20:09.116 { 00:20:09.116 "subsystem": "keyring", 00:20:09.116 "config": [] 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "subsystem": "iobuf", 00:20:09.116 "config": [ 00:20:09.116 { 00:20:09.116 "method": "iobuf_set_options", 00:20:09.116 "params": { 00:20:09.116 "small_pool_count": 8192, 00:20:09.116 "large_pool_count": 1024, 00:20:09.116 "small_bufsize": 8192, 00:20:09.116 "large_bufsize": 135168 00:20:09.116 } 00:20:09.116 } 00:20:09.116 ] 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "subsystem": "sock", 00:20:09.116 "config": [ 00:20:09.116 { 00:20:09.116 "method": "sock_impl_set_options", 00:20:09.116 "params": { 00:20:09.116 "impl_name": "posix", 00:20:09.116 "recv_buf_size": 2097152, 00:20:09.116 "send_buf_size": 2097152, 00:20:09.116 "enable_recv_pipe": true, 00:20:09.116 "enable_quickack": false, 00:20:09.116 "enable_placement_id": 0, 00:20:09.116 "enable_zerocopy_send_server": true, 00:20:09.116 "enable_zerocopy_send_client": false, 00:20:09.116 "zerocopy_threshold": 0, 00:20:09.116 "tls_version": 0, 00:20:09.116 "enable_ktls": false 00:20:09.116 } 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "method": "sock_impl_set_options", 00:20:09.116 "params": { 00:20:09.116 "impl_name": "ssl", 00:20:09.116 "recv_buf_size": 4096, 00:20:09.116 "send_buf_size": 4096, 00:20:09.116 "enable_recv_pipe": true, 00:20:09.116 "enable_quickack": false, 00:20:09.116 "enable_placement_id": 0, 00:20:09.116 "enable_zerocopy_send_server": true, 00:20:09.116 "enable_zerocopy_send_client": false, 00:20:09.116 "zerocopy_threshold": 0, 00:20:09.116 "tls_version": 0, 00:20:09.116 "enable_ktls": false 00:20:09.116 } 00:20:09.116 } 00:20:09.116 ] 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "subsystem": "vmd", 00:20:09.116 "config": [] 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "subsystem": "accel", 00:20:09.116 "config": [ 00:20:09.116 { 00:20:09.116 "method": "accel_set_options", 00:20:09.116 "params": { 00:20:09.116 "small_cache_size": 128, 00:20:09.116 "large_cache_size": 16, 00:20:09.116 "task_count": 2048, 00:20:09.116 "sequence_count": 2048, 00:20:09.116 "buf_count": 2048 00:20:09.116 } 00:20:09.116 } 00:20:09.116 ] 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "subsystem": "bdev", 00:20:09.116 "config": [ 00:20:09.116 { 00:20:09.116 "method": "bdev_set_options", 00:20:09.116 "params": { 00:20:09.116 "bdev_io_pool_size": 65535, 00:20:09.116 "bdev_io_cache_size": 256, 00:20:09.116 "bdev_auto_examine": true, 00:20:09.116 "iobuf_small_cache_size": 128, 00:20:09.116 "iobuf_large_cache_size": 16 00:20:09.116 } 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "method": "bdev_raid_set_options", 00:20:09.116 "params": { 00:20:09.116 "process_window_size_kb": 1024 00:20:09.116 } 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "method": "bdev_iscsi_set_options", 00:20:09.116 "params": { 00:20:09.116 "timeout_sec": 30 00:20:09.116 } 00:20:09.116 }, 00:20:09.116 { 00:20:09.116 "method": "bdev_nvme_set_options", 00:20:09.116 "params": { 00:20:09.116 "action_on_timeout": "none", 00:20:09.116 "timeout_us": 0, 00:20:09.116 
"timeout_admin_us": 0, 00:20:09.116 "keep_alive_timeout_ms": 10000, 00:20:09.116 "arbitration_burst": 0, 00:20:09.116 "low_priority_weight": 0, 00:20:09.116 "medium_priority_weight": 0, 00:20:09.116 "high_priority_weight": 0, 00:20:09.116 "nvme_adminq_poll_period_us": 10000, 00:20:09.116 "nvme_ioq_poll_period_us": 0, 00:20:09.116 "io_queue_requests": 512, 00:20:09.116 "delay_cmd_submit": true, 00:20:09.116 "transport_retry_count": 4, 00:20:09.116 "bdev_retry_count": 3, 00:20:09.116 "transport_ack_timeout": 0, 00:20:09.116 "ctrlr_loss_timeout_sec": 0, 00:20:09.116 "reconnect_delay_sec": 0, 00:20:09.117 "fast_io_fail_timeout_sec": 0, 00:20:09.117 "disable_auto_failback": false, 00:20:09.117 "generate_uuids": false, 00:20:09.117 "transport_tos": 0, 00:20:09.117 "nvme_error_stat": false, 00:20:09.117 "rdma_srq_size": 0, 00:20:09.117 "io_path_stat": false, 00:20:09.117 "allow_accel_sequence": false, 00:20:09.117 "rdma_max_cq_size": 0, 00:20:09.117 "rdma_cm_event_timeout_ms": 0, 00:20:09.117 "dhchap_digests": [ 00:20:09.117 "sha256", 00:20:09.117 "sha384", 00:20:09.117 "sha512" 00:20:09.117 ], 00:20:09.117 "dhchap_dhgroups": [ 00:20:09.117 "null", 00:20:09.117 "ffdhe2048", 00:20:09.117 "ffdhe3072", 00:20:09.117 "ffdhe4096", 00:20:09.117 "ffdhe6144", 00:20:09.117 "ffdhe8192" 00:20:09.117 ] 00:20:09.117 } 00:20:09.117 }, 00:20:09.117 { 00:20:09.117 "method": "bdev_nvme_attach_controller", 00:20:09.117 "params": { 00:20:09.117 "name": "TLSTEST", 00:20:09.117 "trtype": "TCP", 00:20:09.117 "adrfam": "IPv4", 00:20:09.117 "traddr": "10.0.0.2", 00:20:09.117 "trsvcid": "4420", 00:20:09.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.117 "prchk_reftag": false, 00:20:09.117 "prchk_guard": false, 00:20:09.117 "ctrlr_loss_timeout_sec": 0, 00:20:09.117 "reconnect_delay_sec": 0, 00:20:09.117 "fast_io_fail_timeout_sec": 0, 00:20:09.117 "psk": "/tmp/tmp.eTUxjz2W60", 00:20:09.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.117 "hdgst": false, 00:20:09.117 "ddgst": false 00:20:09.117 } 00:20:09.117 }, 00:20:09.117 { 00:20:09.117 "method": "bdev_nvme_set_hotplug", 00:20:09.117 "params": { 00:20:09.117 "period_us": 100000, 00:20:09.117 "enable": false 00:20:09.117 } 00:20:09.117 }, 00:20:09.117 { 00:20:09.117 "method": "bdev_wait_for_examine" 00:20:09.117 } 00:20:09.117 ] 00:20:09.117 }, 00:20:09.117 { 00:20:09.117 "subsystem": "nbd", 00:20:09.117 "config": [] 00:20:09.117 } 00:20:09.117 ] 00:20:09.117 }' 00:20:09.377 [2024-05-15 17:04:47.983248] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:20:09.377 [2024-05-15 17:04:47.983299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498515 ] 00:20:09.377 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.377 [2024-05-15 17:04:48.033062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.377 [2024-05-15 17:04:48.085619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.377 [2024-05-15 17:04:48.202518] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.377 [2024-05-15 17:04:48.202584] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:09.947 17:04:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.947 17:04:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:09.947 17:04:48 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:10.208 Running I/O for 10 seconds... 00:20:20.263 00:20:20.263 Latency(us) 00:20:20.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.263 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:20.263 Verification LBA range: start 0x0 length 0x2000 00:20:20.263 TLSTESTn1 : 10.01 5610.59 21.92 0.00 0.00 22783.15 4587.52 36481.71 00:20:20.263 =================================================================================================================== 00:20:20.263 Total : 5610.59 21.92 0.00 0.00 22783.15 4587.52 36481.71 00:20:20.263 0 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1498515 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1498515 ']' 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1498515 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1498515 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1498515' 00:20:20.263 killing process with pid 1498515 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1498515 00:20:20.263 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.263 00:20:20.263 Latency(us) 00:20:20.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.263 =================================================================================================================== 00:20:20.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.263 [2024-05-15 17:04:58.957460] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:20:20.263 17:04:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1498515 00:20:20.263 17:04:59 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1498486 00:20:20.263 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1498486 ']' 00:20:20.263 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1498486 00:20:20.263 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:20.263 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.263 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1498486 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1498486' 00:20:20.524 killing process with pid 1498486 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1498486 00:20:20.524 [2024-05-15 17:04:59.123145] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:20.524 [2024-05-15 17:04:59.123179] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1498486 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1500825 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1500825 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1500825 ']' 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.524 17:04:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 [2024-05-15 17:04:59.301326] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:20:20.524 [2024-05-15 17:04:59.301377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.524 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.785 [2024-05-15 17:04:59.364884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.785 [2024-05-15 17:04:59.426611] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.785 [2024-05-15 17:04:59.426649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.785 [2024-05-15 17:04:59.426656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.785 [2024-05-15 17:04:59.426662] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.785 [2024-05-15 17:04:59.426668] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.785 [2024-05-15 17:04:59.426688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.eTUxjz2W60 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eTUxjz2W60 00:20:21.356 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.616 [2024-05-15 17:05:00.253344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.616 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.877 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.877 [2024-05-15 17:05:00.598194] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:21.877 [2024-05-15 17:05:00.598245] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.877 [2024-05-15 17:05:00.598430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.877 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:22.137 malloc0 00:20:22.137 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:20:22.397 17:05:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60 00:20:22.397 [2024-05-15 17:05:01.114381] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1501185 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1501185 /var/tmp/bdevperf.sock 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1501185 ']' 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:22.397 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.397 [2024-05-15 17:05:01.191072] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:22.397 [2024-05-15 17:05:01.191122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501185 ] 00:20:22.397 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.657 [2024-05-15 17:05:01.264598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.657 [2024-05-15 17:05:01.318203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.228 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:23.228 17:05:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:23.228 17:05:01 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eTUxjz2W60 00:20:23.488 17:05:02 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:23.488 [2024-05-15 17:05:02.236344] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.488 nvme0n1 00:20:23.488 17:05:02 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.747 Running I/O for 1 seconds... 
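(For reference, the setup_nvmf_tgt and attach sequence traced above condenses to the rpc.py calls below. The PSK file /tmp/tmp.eTUxjz2W60 and the NQNs are the ones generated for this run; -k on the listener corresponds to the "secure_channel": true entry seen in the later save_config dump. rpc.py stands for scripts/rpc.py against the target socket, with -s /var/tmp/bdevperf.sock selecting the bdevperf side.)

    # target side
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eTUxjz2W60
    # initiator (bdevperf) side: load the same PSK into the keyring, then attach with it
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eTUxjz2W60
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1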
00:20:24.686 00:20:24.686 Latency(us) 00:20:24.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.686 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.686 Verification LBA range: start 0x0 length 0x2000 00:20:24.686 nvme0n1 : 1.01 5698.88 22.26 0.00 0.00 22276.13 5379.41 34734.08 00:20:24.686 =================================================================================================================== 00:20:24.686 Total : 5698.88 22.26 0.00 0.00 22276.13 5379.41 34734.08 00:20:24.686 0 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1501185 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1501185 ']' 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1501185 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1501185 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1501185' 00:20:24.686 killing process with pid 1501185 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1501185 00:20:24.686 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.686 00:20:24.686 Latency(us) 00:20:24.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.686 =================================================================================================================== 00:20:24.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.686 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1501185 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1500825 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1500825 ']' 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1500825 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1500825 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1500825' 00:20:24.946 killing process with pid 1500825 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1500825 00:20:24.946 [2024-05-15 17:05:03.653607] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:24.946 [2024-05-15 17:05:03.653650] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:24.946 17:05:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 1500825 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1501596 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1501596 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1501596 ']' 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.207 17:05:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.207 [2024-05-15 17:05:03.854799] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:25.207 [2024-05-15 17:05:03.854851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.207 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.207 [2024-05-15 17:05:03.918909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.207 [2024-05-15 17:05:03.982544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.207 [2024-05-15 17:05:03.982588] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.207 [2024-05-15 17:05:03.982595] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.207 [2024-05-15 17:05:03.982601] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.207 [2024-05-15 17:05:03.982607] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
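(Between cases the script tears its applications down with the killprocess helper and brings the target back up with nvmfappstart inside the cvl_0_0_ns_spdk namespace; the repeated trace above amounts roughly to the following sketch:)

    # sketch of the teardown/restart pattern in the trace
    kill -0 "$pid"                          # verify the process still exists
    ps --no-headers -o comm= "$pid"         # log which reactor process is being killed
    kill "$pid"
    wait "$pid"
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"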
00:20:25.207 [2024-05-15 17:05:03.982626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.147 [2024-05-15 17:05:04.669118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.147 malloc0 00:20:26.147 [2024-05-15 17:05:04.695897] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:26.147 [2024-05-15 17:05:04.695946] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.147 [2024-05-15 17:05:04.696122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1501880 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1501880 /var/tmp/bdevperf.sock 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1501880 ']' 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:26.147 17:05:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.147 [2024-05-15 17:05:04.771494] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:20:26.147 [2024-05-15 17:05:04.771540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501880 ] 00:20:26.147 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.147 [2024-05-15 17:05:04.844331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.147 [2024-05-15 17:05:04.897449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.718 17:05:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:26.718 17:05:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:26.718 17:05:05 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eTUxjz2W60 00:20:26.979 17:05:05 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:27.239 [2024-05-15 17:05:05.815692] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.239 nvme0n1 00:20:27.239 17:05:05 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.239 Running I/O for 1 seconds... 00:20:28.181 00:20:28.181 Latency(us) 00:20:28.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.181 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:28.181 Verification LBA range: start 0x0 length 0x2000 00:20:28.181 nvme0n1 : 1.02 4591.62 17.94 0.00 0.00 27621.81 6198.61 31675.73 00:20:28.181 =================================================================================================================== 00:20:28.181 Total : 4591.62 17.94 0.00 0.00 27621.81 6198.61 31675.73 00:20:28.181 0 00:20:28.443 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:28.443 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.443 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.443 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.443 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:28.443 "subsystems": [ 00:20:28.443 { 00:20:28.443 "subsystem": "keyring", 00:20:28.443 "config": [ 00:20:28.443 { 00:20:28.443 "method": "keyring_file_add_key", 00:20:28.443 "params": { 00:20:28.443 "name": "key0", 00:20:28.443 "path": "/tmp/tmp.eTUxjz2W60" 00:20:28.443 } 00:20:28.443 } 00:20:28.443 ] 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "subsystem": "iobuf", 00:20:28.443 "config": [ 00:20:28.443 { 00:20:28.443 "method": "iobuf_set_options", 00:20:28.443 "params": { 00:20:28.443 "small_pool_count": 8192, 00:20:28.443 "large_pool_count": 1024, 00:20:28.443 "small_bufsize": 8192, 00:20:28.443 "large_bufsize": 135168 00:20:28.443 } 00:20:28.443 } 00:20:28.443 ] 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "subsystem": "sock", 00:20:28.443 "config": [ 00:20:28.443 { 00:20:28.443 "method": "sock_impl_set_options", 00:20:28.443 "params": { 00:20:28.443 "impl_name": "posix", 00:20:28.443 "recv_buf_size": 2097152, 
00:20:28.443 "send_buf_size": 2097152, 00:20:28.443 "enable_recv_pipe": true, 00:20:28.443 "enable_quickack": false, 00:20:28.443 "enable_placement_id": 0, 00:20:28.443 "enable_zerocopy_send_server": true, 00:20:28.443 "enable_zerocopy_send_client": false, 00:20:28.443 "zerocopy_threshold": 0, 00:20:28.443 "tls_version": 0, 00:20:28.443 "enable_ktls": false 00:20:28.443 } 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "method": "sock_impl_set_options", 00:20:28.443 "params": { 00:20:28.443 "impl_name": "ssl", 00:20:28.443 "recv_buf_size": 4096, 00:20:28.443 "send_buf_size": 4096, 00:20:28.443 "enable_recv_pipe": true, 00:20:28.443 "enable_quickack": false, 00:20:28.443 "enable_placement_id": 0, 00:20:28.443 "enable_zerocopy_send_server": true, 00:20:28.443 "enable_zerocopy_send_client": false, 00:20:28.443 "zerocopy_threshold": 0, 00:20:28.443 "tls_version": 0, 00:20:28.443 "enable_ktls": false 00:20:28.443 } 00:20:28.443 } 00:20:28.443 ] 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "subsystem": "vmd", 00:20:28.443 "config": [] 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "subsystem": "accel", 00:20:28.443 "config": [ 00:20:28.443 { 00:20:28.443 "method": "accel_set_options", 00:20:28.443 "params": { 00:20:28.443 "small_cache_size": 128, 00:20:28.443 "large_cache_size": 16, 00:20:28.443 "task_count": 2048, 00:20:28.443 "sequence_count": 2048, 00:20:28.443 "buf_count": 2048 00:20:28.443 } 00:20:28.443 } 00:20:28.443 ] 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "subsystem": "bdev", 00:20:28.443 "config": [ 00:20:28.443 { 00:20:28.443 "method": "bdev_set_options", 00:20:28.443 "params": { 00:20:28.443 "bdev_io_pool_size": 65535, 00:20:28.443 "bdev_io_cache_size": 256, 00:20:28.443 "bdev_auto_examine": true, 00:20:28.443 "iobuf_small_cache_size": 128, 00:20:28.443 "iobuf_large_cache_size": 16 00:20:28.443 } 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "method": "bdev_raid_set_options", 00:20:28.443 "params": { 00:20:28.443 "process_window_size_kb": 1024 00:20:28.443 } 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "method": "bdev_iscsi_set_options", 00:20:28.443 "params": { 00:20:28.443 "timeout_sec": 30 00:20:28.443 } 00:20:28.443 }, 00:20:28.443 { 00:20:28.443 "method": "bdev_nvme_set_options", 00:20:28.443 "params": { 00:20:28.443 "action_on_timeout": "none", 00:20:28.443 "timeout_us": 0, 00:20:28.443 "timeout_admin_us": 0, 00:20:28.444 "keep_alive_timeout_ms": 10000, 00:20:28.444 "arbitration_burst": 0, 00:20:28.444 "low_priority_weight": 0, 00:20:28.444 "medium_priority_weight": 0, 00:20:28.444 "high_priority_weight": 0, 00:20:28.444 "nvme_adminq_poll_period_us": 10000, 00:20:28.444 "nvme_ioq_poll_period_us": 0, 00:20:28.444 "io_queue_requests": 0, 00:20:28.444 "delay_cmd_submit": true, 00:20:28.444 "transport_retry_count": 4, 00:20:28.444 "bdev_retry_count": 3, 00:20:28.444 "transport_ack_timeout": 0, 00:20:28.444 "ctrlr_loss_timeout_sec": 0, 00:20:28.444 "reconnect_delay_sec": 0, 00:20:28.444 "fast_io_fail_timeout_sec": 0, 00:20:28.444 "disable_auto_failback": false, 00:20:28.444 "generate_uuids": false, 00:20:28.444 "transport_tos": 0, 00:20:28.444 "nvme_error_stat": false, 00:20:28.444 "rdma_srq_size": 0, 00:20:28.444 "io_path_stat": false, 00:20:28.444 "allow_accel_sequence": false, 00:20:28.444 "rdma_max_cq_size": 0, 00:20:28.444 "rdma_cm_event_timeout_ms": 0, 00:20:28.444 "dhchap_digests": [ 00:20:28.444 "sha256", 00:20:28.444 "sha384", 00:20:28.444 "sha512" 00:20:28.444 ], 00:20:28.444 "dhchap_dhgroups": [ 00:20:28.444 "null", 00:20:28.444 "ffdhe2048", 00:20:28.444 "ffdhe3072", 
00:20:28.444 "ffdhe4096", 00:20:28.444 "ffdhe6144", 00:20:28.444 "ffdhe8192" 00:20:28.444 ] 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "bdev_nvme_set_hotplug", 00:20:28.444 "params": { 00:20:28.444 "period_us": 100000, 00:20:28.444 "enable": false 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "bdev_malloc_create", 00:20:28.444 "params": { 00:20:28.444 "name": "malloc0", 00:20:28.444 "num_blocks": 8192, 00:20:28.444 "block_size": 4096, 00:20:28.444 "physical_block_size": 4096, 00:20:28.444 "uuid": "bd20d004-bd2d-4a2c-91d3-6910f0164dbf", 00:20:28.444 "optimal_io_boundary": 0 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "bdev_wait_for_examine" 00:20:28.444 } 00:20:28.444 ] 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "subsystem": "nbd", 00:20:28.444 "config": [] 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "subsystem": "scheduler", 00:20:28.444 "config": [ 00:20:28.444 { 00:20:28.444 "method": "framework_set_scheduler", 00:20:28.444 "params": { 00:20:28.444 "name": "static" 00:20:28.444 } 00:20:28.444 } 00:20:28.444 ] 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "subsystem": "nvmf", 00:20:28.444 "config": [ 00:20:28.444 { 00:20:28.444 "method": "nvmf_set_config", 00:20:28.444 "params": { 00:20:28.444 "discovery_filter": "match_any", 00:20:28.444 "admin_cmd_passthru": { 00:20:28.444 "identify_ctrlr": false 00:20:28.444 } 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_set_max_subsystems", 00:20:28.444 "params": { 00:20:28.444 "max_subsystems": 1024 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_set_crdt", 00:20:28.444 "params": { 00:20:28.444 "crdt1": 0, 00:20:28.444 "crdt2": 0, 00:20:28.444 "crdt3": 0 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_create_transport", 00:20:28.444 "params": { 00:20:28.444 "trtype": "TCP", 00:20:28.444 "max_queue_depth": 128, 00:20:28.444 "max_io_qpairs_per_ctrlr": 127, 00:20:28.444 "in_capsule_data_size": 4096, 00:20:28.444 "max_io_size": 131072, 00:20:28.444 "io_unit_size": 131072, 00:20:28.444 "max_aq_depth": 128, 00:20:28.444 "num_shared_buffers": 511, 00:20:28.444 "buf_cache_size": 4294967295, 00:20:28.444 "dif_insert_or_strip": false, 00:20:28.444 "zcopy": false, 00:20:28.444 "c2h_success": false, 00:20:28.444 "sock_priority": 0, 00:20:28.444 "abort_timeout_sec": 1, 00:20:28.444 "ack_timeout": 0, 00:20:28.444 "data_wr_pool_size": 0 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_create_subsystem", 00:20:28.444 "params": { 00:20:28.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.444 "allow_any_host": false, 00:20:28.444 "serial_number": "00000000000000000000", 00:20:28.444 "model_number": "SPDK bdev Controller", 00:20:28.444 "max_namespaces": 32, 00:20:28.444 "min_cntlid": 1, 00:20:28.444 "max_cntlid": 65519, 00:20:28.444 "ana_reporting": false 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_subsystem_add_host", 00:20:28.444 "params": { 00:20:28.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.444 "host": "nqn.2016-06.io.spdk:host1", 00:20:28.444 "psk": "key0" 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_subsystem_add_ns", 00:20:28.444 "params": { 00:20:28.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.444 "namespace": { 00:20:28.444 "nsid": 1, 00:20:28.444 "bdev_name": "malloc0", 00:20:28.444 "nguid": "BD20D004BD2D4A2C91D36910F0164DBF", 00:20:28.444 "uuid": "bd20d004-bd2d-4a2c-91d3-6910f0164dbf", 00:20:28.444 
"no_auto_visible": false 00:20:28.444 } 00:20:28.444 } 00:20:28.444 }, 00:20:28.444 { 00:20:28.444 "method": "nvmf_subsystem_add_listener", 00:20:28.444 "params": { 00:20:28.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.444 "listen_address": { 00:20:28.444 "trtype": "TCP", 00:20:28.444 "adrfam": "IPv4", 00:20:28.444 "traddr": "10.0.0.2", 00:20:28.444 "trsvcid": "4420" 00:20:28.444 }, 00:20:28.444 "secure_channel": true 00:20:28.444 } 00:20:28.444 } 00:20:28.444 ] 00:20:28.444 } 00:20:28.444 ] 00:20:28.444 }' 00:20:28.444 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:28.706 "subsystems": [ 00:20:28.706 { 00:20:28.706 "subsystem": "keyring", 00:20:28.706 "config": [ 00:20:28.706 { 00:20:28.706 "method": "keyring_file_add_key", 00:20:28.706 "params": { 00:20:28.706 "name": "key0", 00:20:28.706 "path": "/tmp/tmp.eTUxjz2W60" 00:20:28.706 } 00:20:28.706 } 00:20:28.706 ] 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "subsystem": "iobuf", 00:20:28.706 "config": [ 00:20:28.706 { 00:20:28.706 "method": "iobuf_set_options", 00:20:28.706 "params": { 00:20:28.706 "small_pool_count": 8192, 00:20:28.706 "large_pool_count": 1024, 00:20:28.706 "small_bufsize": 8192, 00:20:28.706 "large_bufsize": 135168 00:20:28.706 } 00:20:28.706 } 00:20:28.706 ] 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "subsystem": "sock", 00:20:28.706 "config": [ 00:20:28.706 { 00:20:28.706 "method": "sock_impl_set_options", 00:20:28.706 "params": { 00:20:28.706 "impl_name": "posix", 00:20:28.706 "recv_buf_size": 2097152, 00:20:28.706 "send_buf_size": 2097152, 00:20:28.706 "enable_recv_pipe": true, 00:20:28.706 "enable_quickack": false, 00:20:28.706 "enable_placement_id": 0, 00:20:28.706 "enable_zerocopy_send_server": true, 00:20:28.706 "enable_zerocopy_send_client": false, 00:20:28.706 "zerocopy_threshold": 0, 00:20:28.706 "tls_version": 0, 00:20:28.706 "enable_ktls": false 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "sock_impl_set_options", 00:20:28.706 "params": { 00:20:28.706 "impl_name": "ssl", 00:20:28.706 "recv_buf_size": 4096, 00:20:28.706 "send_buf_size": 4096, 00:20:28.706 "enable_recv_pipe": true, 00:20:28.706 "enable_quickack": false, 00:20:28.706 "enable_placement_id": 0, 00:20:28.706 "enable_zerocopy_send_server": true, 00:20:28.706 "enable_zerocopy_send_client": false, 00:20:28.706 "zerocopy_threshold": 0, 00:20:28.706 "tls_version": 0, 00:20:28.706 "enable_ktls": false 00:20:28.706 } 00:20:28.706 } 00:20:28.706 ] 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "subsystem": "vmd", 00:20:28.706 "config": [] 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "subsystem": "accel", 00:20:28.706 "config": [ 00:20:28.706 { 00:20:28.706 "method": "accel_set_options", 00:20:28.706 "params": { 00:20:28.706 "small_cache_size": 128, 00:20:28.706 "large_cache_size": 16, 00:20:28.706 "task_count": 2048, 00:20:28.706 "sequence_count": 2048, 00:20:28.706 "buf_count": 2048 00:20:28.706 } 00:20:28.706 } 00:20:28.706 ] 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "subsystem": "bdev", 00:20:28.706 "config": [ 00:20:28.706 { 00:20:28.706 "method": "bdev_set_options", 00:20:28.706 "params": { 00:20:28.706 "bdev_io_pool_size": 65535, 00:20:28.706 "bdev_io_cache_size": 256, 00:20:28.706 "bdev_auto_examine": true, 00:20:28.706 "iobuf_small_cache_size": 128, 00:20:28.706 "iobuf_large_cache_size": 16 00:20:28.706 } 00:20:28.706 }, 
00:20:28.706 { 00:20:28.706 "method": "bdev_raid_set_options", 00:20:28.706 "params": { 00:20:28.706 "process_window_size_kb": 1024 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "bdev_iscsi_set_options", 00:20:28.706 "params": { 00:20:28.706 "timeout_sec": 30 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "bdev_nvme_set_options", 00:20:28.706 "params": { 00:20:28.706 "action_on_timeout": "none", 00:20:28.706 "timeout_us": 0, 00:20:28.706 "timeout_admin_us": 0, 00:20:28.706 "keep_alive_timeout_ms": 10000, 00:20:28.706 "arbitration_burst": 0, 00:20:28.706 "low_priority_weight": 0, 00:20:28.706 "medium_priority_weight": 0, 00:20:28.706 "high_priority_weight": 0, 00:20:28.706 "nvme_adminq_poll_period_us": 10000, 00:20:28.706 "nvme_ioq_poll_period_us": 0, 00:20:28.706 "io_queue_requests": 512, 00:20:28.706 "delay_cmd_submit": true, 00:20:28.706 "transport_retry_count": 4, 00:20:28.706 "bdev_retry_count": 3, 00:20:28.706 "transport_ack_timeout": 0, 00:20:28.706 "ctrlr_loss_timeout_sec": 0, 00:20:28.706 "reconnect_delay_sec": 0, 00:20:28.706 "fast_io_fail_timeout_sec": 0, 00:20:28.706 "disable_auto_failback": false, 00:20:28.706 "generate_uuids": false, 00:20:28.706 "transport_tos": 0, 00:20:28.706 "nvme_error_stat": false, 00:20:28.706 "rdma_srq_size": 0, 00:20:28.706 "io_path_stat": false, 00:20:28.706 "allow_accel_sequence": false, 00:20:28.706 "rdma_max_cq_size": 0, 00:20:28.706 "rdma_cm_event_timeout_ms": 0, 00:20:28.706 "dhchap_digests": [ 00:20:28.706 "sha256", 00:20:28.706 "sha384", 00:20:28.706 "sha512" 00:20:28.706 ], 00:20:28.706 "dhchap_dhgroups": [ 00:20:28.706 "null", 00:20:28.706 "ffdhe2048", 00:20:28.706 "ffdhe3072", 00:20:28.706 "ffdhe4096", 00:20:28.706 "ffdhe6144", 00:20:28.706 "ffdhe8192" 00:20:28.706 ] 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "bdev_nvme_attach_controller", 00:20:28.706 "params": { 00:20:28.706 "name": "nvme0", 00:20:28.706 "trtype": "TCP", 00:20:28.706 "adrfam": "IPv4", 00:20:28.706 "traddr": "10.0.0.2", 00:20:28.706 "trsvcid": "4420", 00:20:28.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.706 "prchk_reftag": false, 00:20:28.706 "prchk_guard": false, 00:20:28.706 "ctrlr_loss_timeout_sec": 0, 00:20:28.706 "reconnect_delay_sec": 0, 00:20:28.706 "fast_io_fail_timeout_sec": 0, 00:20:28.706 "psk": "key0", 00:20:28.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.706 "hdgst": false, 00:20:28.706 "ddgst": false 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "bdev_nvme_set_hotplug", 00:20:28.706 "params": { 00:20:28.706 "period_us": 100000, 00:20:28.706 "enable": false 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "bdev_enable_histogram", 00:20:28.706 "params": { 00:20:28.706 "name": "nvme0n1", 00:20:28.706 "enable": true 00:20:28.706 } 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "method": "bdev_wait_for_examine" 00:20:28.706 } 00:20:28.706 ] 00:20:28.706 }, 00:20:28.706 { 00:20:28.706 "subsystem": "nbd", 00:20:28.706 "config": [] 00:20:28.706 } 00:20:28.706 ] 00:20:28.706 }' 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1501880 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1501880 ']' 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1501880 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:28.706 
17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1501880 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1501880' 00:20:28.706 killing process with pid 1501880 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1501880 00:20:28.706 Received shutdown signal, test time was about 1.000000 seconds 00:20:28.706 00:20:28.706 Latency(us) 00:20:28.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.706 =================================================================================================================== 00:20:28.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.706 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1501880 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1501596 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1501596 ']' 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1501596 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1501596 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1501596' 00:20:28.968 killing process with pid 1501596 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1501596 00:20:28.968 [2024-05-15 17:05:07.602892] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1501596 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:28.968 17:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:28.968 "subsystems": [ 00:20:28.968 { 00:20:28.968 "subsystem": "keyring", 00:20:28.968 "config": [ 00:20:28.968 { 00:20:28.968 "method": "keyring_file_add_key", 00:20:28.968 "params": { 00:20:28.968 "name": "key0", 00:20:28.968 "path": "/tmp/tmp.eTUxjz2W60" 00:20:28.968 } 00:20:28.968 } 00:20:28.968 ] 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "subsystem": "iobuf", 00:20:28.968 "config": [ 00:20:28.968 { 00:20:28.968 "method": "iobuf_set_options", 00:20:28.968 "params": { 00:20:28.968 "small_pool_count": 8192, 00:20:28.968 "large_pool_count": 1024, 00:20:28.968 "small_bufsize": 8192, 00:20:28.968 "large_bufsize": 135168 00:20:28.968 } 00:20:28.968 } 00:20:28.968 ] 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "subsystem": "sock", 00:20:28.968 "config": [ 00:20:28.968 { 00:20:28.968 "method": 
"sock_impl_set_options", 00:20:28.968 "params": { 00:20:28.968 "impl_name": "posix", 00:20:28.968 "recv_buf_size": 2097152, 00:20:28.968 "send_buf_size": 2097152, 00:20:28.968 "enable_recv_pipe": true, 00:20:28.968 "enable_quickack": false, 00:20:28.968 "enable_placement_id": 0, 00:20:28.968 "enable_zerocopy_send_server": true, 00:20:28.968 "enable_zerocopy_send_client": false, 00:20:28.968 "zerocopy_threshold": 0, 00:20:28.968 "tls_version": 0, 00:20:28.968 "enable_ktls": false 00:20:28.968 } 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "method": "sock_impl_set_options", 00:20:28.968 "params": { 00:20:28.968 "impl_name": "ssl", 00:20:28.968 "recv_buf_size": 4096, 00:20:28.968 "send_buf_size": 4096, 00:20:28.968 "enable_recv_pipe": true, 00:20:28.968 "enable_quickack": false, 00:20:28.968 "enable_placement_id": 0, 00:20:28.968 "enable_zerocopy_send_server": true, 00:20:28.968 "enable_zerocopy_send_client": false, 00:20:28.968 "zerocopy_threshold": 0, 00:20:28.968 "tls_version": 0, 00:20:28.968 "enable_ktls": false 00:20:28.968 } 00:20:28.968 } 00:20:28.968 ] 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "subsystem": "vmd", 00:20:28.968 "config": [] 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "subsystem": "accel", 00:20:28.968 "config": [ 00:20:28.968 { 00:20:28.968 "method": "accel_set_options", 00:20:28.968 "params": { 00:20:28.968 "small_cache_size": 128, 00:20:28.968 "large_cache_size": 16, 00:20:28.968 "task_count": 2048, 00:20:28.968 "sequence_count": 2048, 00:20:28.968 "buf_count": 2048 00:20:28.968 } 00:20:28.968 } 00:20:28.968 ] 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "subsystem": "bdev", 00:20:28.968 "config": [ 00:20:28.968 { 00:20:28.968 "method": "bdev_set_options", 00:20:28.968 "params": { 00:20:28.968 "bdev_io_pool_size": 65535, 00:20:28.968 "bdev_io_cache_size": 256, 00:20:28.968 "bdev_auto_examine": true, 00:20:28.968 "iobuf_small_cache_size": 128, 00:20:28.968 "iobuf_large_cache_size": 16 00:20:28.968 } 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "method": "bdev_raid_set_options", 00:20:28.968 "params": { 00:20:28.968 "process_window_size_kb": 1024 00:20:28.968 } 00:20:28.968 }, 00:20:28.968 { 00:20:28.968 "method": "bdev_iscsi_set_options", 00:20:28.969 "params": { 00:20:28.969 "timeout_sec": 30 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "bdev_nvme_set_options", 00:20:28.969 "params": { 00:20:28.969 "action_on_timeout": "none", 00:20:28.969 "timeout_us": 0, 00:20:28.969 "timeout_admin_us": 0, 00:20:28.969 "keep_alive_timeout_ms": 10000, 00:20:28.969 "arbitration_burst": 0, 00:20:28.969 "low_priority_weight": 0, 00:20:28.969 "medium_priority_weight": 0, 00:20:28.969 "high_priority_weight": 0, 00:20:28.969 "nvme_adminq_poll_period_us": 10000, 00:20:28.969 "nvme_ioq_poll_period_us": 0, 00:20:28.969 "io_queue_requests": 0, 00:20:28.969 "delay_cmd_submit": true, 00:20:28.969 "transport_retry_count": 4, 00:20:28.969 "bdev_retry_count": 3, 00:20:28.969 "transport_ack_timeout": 0, 00:20:28.969 "ctrlr_loss_timeout_sec": 0, 00:20:28.969 "reconnect_delay_sec": 0, 00:20:28.969 "fast_io_fail_timeout_sec": 0, 00:20:28.969 "disable_auto_failback": false, 00:20:28.969 "generate_uuids": false, 00:20:28.969 "transport_tos": 0, 00:20:28.969 "nvme_error_stat": false, 00:20:28.969 "rdma_srq_size": 0, 00:20:28.969 "io_path_stat": false, 00:20:28.969 "allow_accel_sequence": false, 00:20:28.969 "rdma_max_cq_size": 0, 00:20:28.969 "rdma_cm_event_timeout_ms": 0, 00:20:28.969 "dhchap_digests": [ 00:20:28.969 "sha256", 00:20:28.969 "sha384", 00:20:28.969 "sha512" 
00:20:28.969 ], 00:20:28.969 "dhchap_dhgroups": [ 00:20:28.969 "null", 00:20:28.969 "ffdhe2048", 00:20:28.969 "ffdhe3072", 00:20:28.969 "ffdhe4096", 00:20:28.969 "ffdhe6144", 00:20:28.969 "ffdhe8192" 00:20:28.969 ] 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "bdev_nvme_set_hotplug", 00:20:28.969 "params": { 00:20:28.969 "period_us": 100000, 00:20:28.969 "enable": false 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "bdev_malloc_create", 00:20:28.969 "params": { 00:20:28.969 "name": "malloc0", 00:20:28.969 "num_blocks": 8192, 00:20:28.969 "block_size": 4096, 00:20:28.969 "physical_block_size": 4096, 00:20:28.969 "uuid": "bd20d004-bd2d-4a2c-91d3-6910f0164dbf", 00:20:28.969 "optimal_io_boundary": 0 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "bdev_wait_for_examine" 00:20:28.969 } 00:20:28.969 ] 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "subsystem": "nbd", 00:20:28.969 "config": [] 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "subsystem": "scheduler", 00:20:28.969 "config": [ 00:20:28.969 { 00:20:28.969 "method": "framework_set_scheduler", 00:20:28.969 "params": { 00:20:28.969 "name": "static" 00:20:28.969 } 00:20:28.969 } 00:20:28.969 ] 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "subsystem": "nvmf", 00:20:28.969 "config": [ 00:20:28.969 { 00:20:28.969 "method": "nvmf_set_config", 00:20:28.969 "params": { 00:20:28.969 "discovery_filter": "match_any", 00:20:28.969 "admin_cmd_passthru": { 00:20:28.969 "identify_ctrlr": false 00:20:28.969 } 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_set_max_subsystems", 00:20:28.969 "params": { 00:20:28.969 "max_subsystems": 1024 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_set_crdt", 00:20:28.969 "params": { 00:20:28.969 "crdt1": 0, 00:20:28.969 "crdt2": 0, 00:20:28.969 "crdt3": 0 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_create_transport", 00:20:28.969 "params": { 00:20:28.969 "trtype": "TCP", 00:20:28.969 "max_queue_depth": 128, 00:20:28.969 "max_io_qpairs_per_ctrlr": 127, 00:20:28.969 "in_capsule_data_size": 4096, 00:20:28.969 "max_io_size": 131072, 00:20:28.969 "io_unit_size": 131072, 00:20:28.969 "max_aq_depth": 128, 00:20:28.969 "num_shared_buffers": 511, 00:20:28.969 "buf_cache_size": 4294967295, 00:20:28.969 "dif_insert_or_strip": false, 00:20:28.969 "zcopy": false, 00:20:28.969 "c2h_success": false, 00:20:28.969 "sock_priority": 0, 00:20:28.969 "abort_timeout_sec": 1, 00:20:28.969 "ack_timeout": 0, 00:20:28.969 "data_wr_pool_size": 0 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_create_subsystem", 00:20:28.969 "params": { 00:20:28.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.969 "allow_any_host": false, 00:20:28.969 "serial_number": "00000000000000000000", 00:20:28.969 "model_number": "SPDK bdev Controller", 00:20:28.969 "max_namespaces": 32, 00:20:28.969 "min_cntlid": 1, 00:20:28.969 "max_cntlid": 65519, 00:20:28.969 "ana_reporting": false 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_subsystem_add_host", 00:20:28.969 "params": { 00:20:28.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.969 "host": "nqn.2016-06.io.spdk:host1", 00:20:28.969 "psk": "key0" 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_subsystem_add_ns", 00:20:28.969 "params": { 00:20:28.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.969 "namespace": { 00:20:28.969 "nsid": 1, 00:20:28.969 "bdev_name": "malloc0", 00:20:28.969 
"nguid": "BD20D004BD2D4A2C91D36910F0164DBF", 00:20:28.969 "uuid": "bd20d004-bd2d-4a2c-91d3-6910f0164dbf", 00:20:28.969 "no_auto_visible": false 00:20:28.969 } 00:20:28.969 } 00:20:28.969 }, 00:20:28.969 { 00:20:28.969 "method": "nvmf_subsystem_add_listener", 00:20:28.969 "params": { 00:20:28.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.969 "listen_address": { 00:20:28.969 "trtype": "TCP", 00:20:28.969 "adrfam": "IPv4", 00:20:28.969 "traddr": "10.0.0.2", 00:20:28.969 "trsvcid": "4420" 00:20:28.969 }, 00:20:28.969 "secure_channel": true 00:20:28.969 } 00:20:28.969 } 00:20:28.969 ] 00:20:28.969 } 00:20:28.969 ] 00:20:28.969 }' 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1502552 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1502552 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1502552 ']' 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:28.969 17:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.969 [2024-05-15 17:05:07.800740] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:28.969 [2024-05-15 17:05:07.800794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.231 [2024-05-15 17:05:07.864420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.231 [2024-05-15 17:05:07.927649] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.231 [2024-05-15 17:05:07.927683] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.231 [2024-05-15 17:05:07.927694] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.231 [2024-05-15 17:05:07.927700] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.231 [2024-05-15 17:05:07.927706] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:29.231 [2024-05-15 17:05:07.927756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.491 [2024-05-15 17:05:08.117070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.491 [2024-05-15 17:05:08.149055] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:29.491 [2024-05-15 17:05:08.149102] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:29.491 [2024-05-15 17:05:08.157846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.753 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.753 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:29.753 17:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.753 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.753 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1502582 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1502582 /var/tmp/bdevperf.sock 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1502582 ']' 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
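At this point the target reports "NVMe/TCP Target Listening on 10.0.0.2 port 4420" from inside the cvl_0_0_ns_spdk namespace. A quick way to confirm the listener by hand is sketched below with standard tooling plus the nvmf_get_subsystems RPC; the exact output is not shown in this log, so treat the invocations as illustrative.

  # Sketch: confirm the TLS-enabled listener before pointing bdevperf at it.
  ip netns exec cvl_0_0_ns_spdk ss -tln | grep 4420     # TCP socket should be in LISTEN state
  rpc.py nvmf_get_subsystems                            # cnode1 should list the 10.0.0.2:4420 listen address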
00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.014 17:05:08 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:30.014 "subsystems": [ 00:20:30.014 { 00:20:30.014 "subsystem": "keyring", 00:20:30.014 "config": [ 00:20:30.014 { 00:20:30.014 "method": "keyring_file_add_key", 00:20:30.014 "params": { 00:20:30.014 "name": "key0", 00:20:30.014 "path": "/tmp/tmp.eTUxjz2W60" 00:20:30.014 } 00:20:30.014 } 00:20:30.014 ] 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "subsystem": "iobuf", 00:20:30.014 "config": [ 00:20:30.014 { 00:20:30.014 "method": "iobuf_set_options", 00:20:30.014 "params": { 00:20:30.014 "small_pool_count": 8192, 00:20:30.014 "large_pool_count": 1024, 00:20:30.014 "small_bufsize": 8192, 00:20:30.014 "large_bufsize": 135168 00:20:30.014 } 00:20:30.014 } 00:20:30.014 ] 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "subsystem": "sock", 00:20:30.014 "config": [ 00:20:30.014 { 00:20:30.014 "method": "sock_impl_set_options", 00:20:30.014 "params": { 00:20:30.014 "impl_name": "posix", 00:20:30.014 "recv_buf_size": 2097152, 00:20:30.014 "send_buf_size": 2097152, 00:20:30.014 "enable_recv_pipe": true, 00:20:30.014 "enable_quickack": false, 00:20:30.014 "enable_placement_id": 0, 00:20:30.014 "enable_zerocopy_send_server": true, 00:20:30.014 "enable_zerocopy_send_client": false, 00:20:30.014 "zerocopy_threshold": 0, 00:20:30.014 "tls_version": 0, 00:20:30.014 "enable_ktls": false 00:20:30.014 } 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "method": "sock_impl_set_options", 00:20:30.014 "params": { 00:20:30.014 "impl_name": "ssl", 00:20:30.014 "recv_buf_size": 4096, 00:20:30.014 "send_buf_size": 4096, 00:20:30.014 "enable_recv_pipe": true, 00:20:30.014 "enable_quickack": false, 00:20:30.014 "enable_placement_id": 0, 00:20:30.014 "enable_zerocopy_send_server": true, 00:20:30.014 "enable_zerocopy_send_client": false, 00:20:30.014 "zerocopy_threshold": 0, 00:20:30.014 "tls_version": 0, 00:20:30.014 "enable_ktls": false 00:20:30.014 } 00:20:30.014 } 00:20:30.014 ] 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "subsystem": "vmd", 00:20:30.014 "config": [] 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "subsystem": "accel", 00:20:30.014 "config": [ 00:20:30.014 { 00:20:30.014 "method": "accel_set_options", 00:20:30.014 "params": { 00:20:30.014 "small_cache_size": 128, 00:20:30.014 "large_cache_size": 16, 00:20:30.014 "task_count": 2048, 00:20:30.014 "sequence_count": 2048, 00:20:30.014 "buf_count": 2048 00:20:30.014 } 00:20:30.014 } 00:20:30.014 ] 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "subsystem": "bdev", 00:20:30.014 "config": [ 00:20:30.014 { 00:20:30.014 "method": "bdev_set_options", 00:20:30.014 "params": { 00:20:30.014 "bdev_io_pool_size": 65535, 00:20:30.014 "bdev_io_cache_size": 256, 00:20:30.014 "bdev_auto_examine": true, 00:20:30.014 "iobuf_small_cache_size": 128, 00:20:30.014 "iobuf_large_cache_size": 16 00:20:30.014 } 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "method": "bdev_raid_set_options", 00:20:30.014 "params": { 00:20:30.014 "process_window_size_kb": 1024 00:20:30.014 } 00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "method": "bdev_iscsi_set_options", 00:20:30.014 "params": { 00:20:30.014 "timeout_sec": 30 00:20:30.014 } 
00:20:30.014 }, 00:20:30.014 { 00:20:30.014 "method": "bdev_nvme_set_options", 00:20:30.014 "params": { 00:20:30.014 "action_on_timeout": "none", 00:20:30.014 "timeout_us": 0, 00:20:30.014 "timeout_admin_us": 0, 00:20:30.014 "keep_alive_timeout_ms": 10000, 00:20:30.014 "arbitration_burst": 0, 00:20:30.014 "low_priority_weight": 0, 00:20:30.014 "medium_priority_weight": 0, 00:20:30.014 "high_priority_weight": 0, 00:20:30.015 "nvme_adminq_poll_period_us": 10000, 00:20:30.015 "nvme_ioq_poll_period_us": 0, 00:20:30.015 "io_queue_requests": 512, 00:20:30.015 "delay_cmd_submit": true, 00:20:30.015 "transport_retry_count": 4, 00:20:30.015 "bdev_retry_count": 3, 00:20:30.015 "transport_ack_timeout": 0, 00:20:30.015 "ctrlr_loss_timeout_sec": 0, 00:20:30.015 "reconnect_delay_sec": 0, 00:20:30.015 "fast_io_fail_timeout_sec": 0, 00:20:30.015 "disable_auto_failback": false, 00:20:30.015 "generate_uuids": false, 00:20:30.015 "transport_tos": 0, 00:20:30.015 "nvme_error_stat": false, 00:20:30.015 "rdma_srq_size": 0, 00:20:30.015 "io_path_stat": false, 00:20:30.015 "allow_accel_sequence": false, 00:20:30.015 "rdma_max_cq_size": 0, 00:20:30.015 "rdma_cm_event_timeout_ms": 0, 00:20:30.015 "dhchap_digests": [ 00:20:30.015 "sha256", 00:20:30.015 "sha384", 00:20:30.015 "sha512" 00:20:30.015 ], 00:20:30.015 "dhchap_dhgroups": [ 00:20:30.015 "null", 00:20:30.015 "ffdhe2048", 00:20:30.015 "ffdhe3072", 00:20:30.015 "ffdhe4096", 00:20:30.015 "ffdhe6144", 00:20:30.015 "ffdhe8192" 00:20:30.015 ] 00:20:30.015 } 00:20:30.015 }, 00:20:30.015 { 00:20:30.015 "method": "bdev_nvme_attach_controller", 00:20:30.015 "params": { 00:20:30.015 "name": "nvme0", 00:20:30.015 "trtype": "TCP", 00:20:30.015 "adrfam": "IPv4", 00:20:30.015 "traddr": "10.0.0.2", 00:20:30.015 "trsvcid": "4420", 00:20:30.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.015 "prchk_reftag": false, 00:20:30.015 "prchk_guard": false, 00:20:30.015 "ctrlr_loss_timeout_sec": 0, 00:20:30.015 "reconnect_delay_sec": 0, 00:20:30.015 "fast_io_fail_timeout_sec": 0, 00:20:30.015 "psk": "key0", 00:20:30.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.015 "hdgst": false, 00:20:30.015 "ddgst": false 00:20:30.015 } 00:20:30.015 }, 00:20:30.015 { 00:20:30.015 "method": "bdev_nvme_set_hotplug", 00:20:30.015 "params": { 00:20:30.015 "period_us": 100000, 00:20:30.015 "enable": false 00:20:30.015 } 00:20:30.015 }, 00:20:30.015 { 00:20:30.015 "method": "bdev_enable_histogram", 00:20:30.015 "params": { 00:20:30.015 "name": "nvme0n1", 00:20:30.015 "enable": true 00:20:30.015 } 00:20:30.015 }, 00:20:30.015 { 00:20:30.015 "method": "bdev_wait_for_examine" 00:20:30.015 } 00:20:30.015 ] 00:20:30.015 }, 00:20:30.015 { 00:20:30.015 "subsystem": "nbd", 00:20:30.015 "config": [] 00:20:30.015 } 00:20:30.015 ] 00:20:30.015 }' 00:20:30.015 [2024-05-15 17:05:08.660364] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
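The second JSON blob above is the host-side (bdevperf) configuration: it loads the same PSK from /tmp/tmp.eTUxjz2W60 under the name "key0" and attaches controller nvme0 to nqn.2016-06.io.spdk:cnode1 over TLS. The fips test later in this log performs the equivalent attach imperatively over the bdevperf RPC socket; a condensed sketch of that form, reusing the names from the config above (--psk takes the keyring name here to mirror the JSON, while the fips run below passes a key file path instead):

  # Sketch: attach a TLS-protected NVMe/TCP controller through bdevperf's RPC socket.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eTUxjz2W60
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0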
00:20:30.015 [2024-05-15 17:05:08.660416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502582 ] 00:20:30.015 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.015 [2024-05-15 17:05:08.736137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.015 [2024-05-15 17:05:08.789727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.276 [2024-05-15 17:05:08.915559] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.847 17:05:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:30.847 17:05:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:30.847 17:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.847 17:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:30.847 17:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.847 17:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.847 Running I/O for 1 seconds... 00:20:32.233 00:20:32.233 Latency(us) 00:20:32.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.233 Verification LBA range: start 0x0 length 0x2000 00:20:32.233 nvme0n1 : 1.02 4042.76 15.79 0.00 0.00 31385.00 5816.32 76021.76 00:20:32.233 =================================================================================================================== 00:20:32.233 Total : 4042.76 15.79 0.00 0.00 31385.00 5816.32 76021.76 00:20:32.233 0 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:32.233 nvmf_trace.0 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1502582 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1502582 ']' 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1502582 
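The run above follows bdevperf's daemon pattern: it is started with -z so it waits on /var/tmp/bdevperf.sock, the controller is attached over that socket, and the workload is only kicked off once the attach is confirmed. The two commands from the log, grouped for reference:

  # Sketch of the verify-then-run sequence used above.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                             # runs the -q 128 -o 4k -w verify job for 1 second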
00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1502582 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1502582' 00:20:32.233 killing process with pid 1502582 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1502582 00:20:32.233 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.233 00:20:32.233 Latency(us) 00:20:32.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.233 =================================================================================================================== 00:20:32.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1502582 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.233 17:05:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.233 rmmod nvme_tcp 00:20:32.233 rmmod nvme_fabrics 00:20:32.233 rmmod nvme_keyring 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1502552 ']' 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1502552 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1502552 ']' 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1502552 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:32.233 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1502552 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1502552' 00:20:32.495 killing process with pid 1502552 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1502552 00:20:32.495 [2024-05-15 17:05:11.098747] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 1502552 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.495 17:05:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.043 17:05:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:35.043 17:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.P75evqZIjs /tmp/tmp.2lTJQuUIJX /tmp/tmp.eTUxjz2W60 00:20:35.043 00:20:35.043 real 1m23.502s 00:20:35.043 user 2m11.032s 00:20:35.043 sys 0m25.329s 00:20:35.043 17:05:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:35.043 17:05:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.043 ************************************ 00:20:35.043 END TEST nvmf_tls 00:20:35.043 ************************************ 00:20:35.043 17:05:13 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.043 17:05:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:35.043 17:05:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:35.043 17:05:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.043 ************************************ 00:20:35.043 START TEST nvmf_fips 00:20:35.043 ************************************ 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.044 * Looking for test storage... 
00:20:35.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.044 17:05:13 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:35.044 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:35.045 Error setting digest 00:20:35.045 00B2E677277F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:35.045 00B2E677277F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:35.045 17:05:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.637 
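The fips.sh preamble above checks that the system OpenSSL is at least 3.0.0 (3.0.9 here), that /usr/lib64/ossl-modules/fips.so exists, that both a base and a fips provider are listed once OPENSSL_CONF points at the generated spdk_fips.conf, and that MD5 is rejected ("Error setting digest"). The same sanity checks can be reproduced by hand; a short sketch, assuming a RHEL 9-style OpenSSL 3 build like the one in this log:

  # Sketch: manual version of the FIPS sanity checks performed by fips/fips.sh.
  openssl version                          # expect 3.x (3.0.9 in the log above)
  openssl list -providers | grep name      # expect "base" and "fips" providers
  echo -n test | openssl md5               # should fail under FIPS with an "unsupported" digest error
  echo -n test | openssl sha256            # SHA-256 remains available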
17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:41.637 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:41.637 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:41.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.637 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:41.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.638 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.898 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.898 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.898 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:41.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:20:41.899 00:20:41.899 --- 10.0.0.2 ping statistics --- 00:20:41.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.899 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:20:41.899 00:20:41.899 --- 10.0.0.1 ping statistics --- 00:20:41.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.899 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:41.899 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:42.159 17:05:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1507243 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1507243 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1507243 ']' 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:42.160 17:05:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:42.160 [2024-05-15 17:05:20.823176] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:42.160 [2024-05-15 17:05:20.823227] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.160 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.160 [2024-05-15 17:05:20.903110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.160 [2024-05-15 17:05:20.966891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.160 [2024-05-15 17:05:20.966930] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
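nvmf_tcp_init above wires the two e810 ports back to back: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24 on the host side, TCP/4420 is opened in iptables, and the two pings confirm reachability in both directions. The same wiring, consolidated from the commands above (interface and namespace names are the ones this rig uses):

  # Sketch: the back-to-back NIC wiring set up by nvmf_tcp_init.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator side -> target-side address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator side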
00:20:42.160 [2024-05-15 17:05:20.966938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.160 [2024-05-15 17:05:20.966944] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.160 [2024-05-15 17:05:20.966950] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.160 [2024-05-15 17:05:20.966975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.104 [2024-05-15 17:05:21.815801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.104 [2024-05-15 17:05:21.831777] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:43.104 [2024-05-15 17:05:21.831837] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.104 [2024-05-15 17:05:21.832098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.104 [2024-05-15 17:05:21.861298] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:43.104 malloc0 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.104 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1507589 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1507589 /var/tmp/bdevperf.sock 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 1507589 ']' 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:43.105 17:05:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.366 [2024-05-15 17:05:21.954173] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:20:43.366 [2024-05-15 17:05:21.954249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507589 ] 00:20:43.366 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.366 [2024-05-15 17:05:22.011126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.366 [2024-05-15 17:05:22.074504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.938 17:05:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:43.938 17:05:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:43.938 17:05:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.198 [2024-05-15 17:05:22.846761] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.198 [2024-05-15 17:05:22.846824] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.198 TLSTESTn1 00:20:44.198 17:05:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.198 Running I/O for 10 seconds... 
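Unlike the earlier nvmf_tls run, the fips test provisions the TLS PSK as an interchange-format key file rather than a named keyring entry: the NVMeTLSkey-1:01:... string is written to fips/key.txt with mode 0600, handed to the target by setup_nvmf_tgt_conf (the log flags the PSK-path form as deprecated in favor of the keyring), and passed to the initiator via --psk on bdev_nvme_attach_controller. Condensed from the commands above:

  # Sketch: the PSK-file flow used by fips/fips.sh (key string copied from the log above).
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt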
00:20:54.307 00:20:54.307 Latency(us) 00:20:54.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.307 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.307 Verification LBA range: start 0x0 length 0x2000 00:20:54.307 TLSTESTn1 : 10.02 5857.36 22.88 0.00 0.00 21819.53 5707.09 72963.41 00:20:54.307 =================================================================================================================== 00:20:54.307 Total : 5857.36 22.88 0.00 0.00 21819.53 5707.09 72963.41 00:20:54.307 0 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:54.307 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:54.307 nvmf_trace.0 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1507589 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1507589 ']' 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1507589 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1507589 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1507589' 00:20:54.567 killing process with pid 1507589 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1507589 00:20:54.567 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.567 00:20:54.567 Latency(us) 00:20:54.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.567 =================================================================================================================== 00:20:54.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.567 [2024-05-15 17:05:33.223196] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1507589 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:54.567 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:54.568 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:54.568 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:54.568 rmmod nvme_tcp 00:20:54.568 rmmod nvme_fabrics 00:20:54.568 rmmod nvme_keyring 00:20:54.568 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:54.828 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:54.828 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:54.828 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1507243 ']' 00:20:54.828 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1507243 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1507243 ']' 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1507243 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1507243 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1507243' 00:20:54.829 killing process with pid 1507243 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1507243 00:20:54.829 [2024-05-15 17:05:33.461230] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:54.829 [2024-05-15 17:05:33.461262] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1507243 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.829 17:05:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.373 17:05:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.373 17:05:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:57.373 00:20:57.373 real 0m22.309s 00:20:57.373 user 0m24.330s 00:20:57.373 sys 0m8.664s 00:20:57.373 17:05:35 
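nvmftestfini above unwinds the run in a fixed order: unload the nvme-tcp and nvme-fabrics modules (which also drops nvme_keyring), kill the nvmf_tgt it started, then flush the initiator-side address and remove the namespace. A condensed sketch of that teardown for this run, where the pid is the one from this log and ip netns delete stands in for the _remove_spdk_ns helper:

  # Sketch: manual equivalent of nvmftestfini's cleanup for this fips run.
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill 1507243                        # nvmf_tgt started for the fips test
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk     # assumption: standard equivalent of _remove_spdk_ns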
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:57.373 17:05:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:57.373 ************************************ 00:20:57.373 END TEST nvmf_fips 00:20:57.373 ************************************ 00:20:57.373 17:05:35 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:57.373 17:05:35 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:57.373 17:05:35 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:57.373 17:05:35 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:57.373 17:05:35 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.373 17:05:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.962 17:05:42 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.963 17:05:42 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:03.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:03.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:03.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:03.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:03.963 17:05:42 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
00:21:03.963 17:05:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:03.963 17:05:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:03.963 17:05:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.963 ************************************ 00:21:03.963 START TEST nvmf_perf_adq 00:21:03.963 ************************************ 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:03.963 * Looking for test storage... 00:21:03.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:03.963 17:05:42 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.964 17:05:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:10.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:10.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:10.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:10.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:10.552 17:05:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:12.466 17:05:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:14.378 17:05:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:19.659 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:19.659 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:19.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:19.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:19.659 17:05:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:19.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.819 ms 00:21:19.659 00:21:19.659 --- 10.0.0.2 ping statistics --- 00:21:19.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.659 rtt min/avg/max/mdev = 0.819/0.819/0.819/0.000 ms 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:21:19.659 00:21:19.659 --- 10.0.0.1 ping statistics --- 00:21:19.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.659 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1519094 00:21:19.659 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1519094 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1519094 ']' 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:19.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.660 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.660 [2024-05-15 17:05:58.176358] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:21:19.660 [2024-05-15 17:05:58.176424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.660 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.660 [2024-05-15 17:05:58.251479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.660 [2024-05-15 17:05:58.327934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.660 [2024-05-15 17:05:58.327971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.660 [2024-05-15 17:05:58.327982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.660 [2024-05-15 17:05:58.327989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.660 [2024-05-15 17:05:58.327995] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.660 [2024-05-15 17:05:58.328136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.660 [2024-05-15 17:05:58.328254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.660 [2024-05-15 17:05:58.328412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.660 [2024-05-15 17:05:58.328413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.231 17:05:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.231 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 [2024-05-15 17:05:59.145515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 Malloc1 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:20.492 [2024-05-15 17:05:59.204666] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:20.492 [2024-05-15 17:05:59.204913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1519403 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:20.492 17:05:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:20.492 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.402 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:22.402 17:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.402 17:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.661 17:06:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.661 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:22.661 "tick_rate": 2400000000, 00:21:22.661 "poll_groups": [ 00:21:22.661 { 00:21:22.661 "name": "nvmf_tgt_poll_group_000", 00:21:22.661 "admin_qpairs": 1, 00:21:22.661 "io_qpairs": 1, 00:21:22.661 "current_admin_qpairs": 1, 00:21:22.661 "current_io_qpairs": 1, 00:21:22.661 "pending_bdev_io": 0, 00:21:22.661 "completed_nvme_io": 18265, 00:21:22.661 "transports": [ 00:21:22.661 { 00:21:22.661 "trtype": "TCP" 00:21:22.661 } 00:21:22.661 ] 00:21:22.661 }, 00:21:22.661 { 00:21:22.661 "name": "nvmf_tgt_poll_group_001", 00:21:22.661 "admin_qpairs": 0, 00:21:22.661 "io_qpairs": 1, 00:21:22.661 "current_admin_qpairs": 0, 00:21:22.661 "current_io_qpairs": 1, 00:21:22.661 "pending_bdev_io": 0, 00:21:22.661 "completed_nvme_io": 26644, 00:21:22.661 "transports": [ 00:21:22.661 { 00:21:22.661 "trtype": "TCP" 00:21:22.661 } 00:21:22.661 ] 00:21:22.661 }, 00:21:22.662 { 00:21:22.662 "name": "nvmf_tgt_poll_group_002", 00:21:22.662 "admin_qpairs": 0, 00:21:22.662 "io_qpairs": 1, 00:21:22.662 "current_admin_qpairs": 0, 00:21:22.662 "current_io_qpairs": 1, 00:21:22.662 "pending_bdev_io": 0, 00:21:22.662 "completed_nvme_io": 19643, 00:21:22.662 "transports": [ 00:21:22.662 { 00:21:22.662 "trtype": "TCP" 00:21:22.662 } 00:21:22.662 ] 00:21:22.662 }, 00:21:22.662 { 00:21:22.662 "name": "nvmf_tgt_poll_group_003", 00:21:22.662 "admin_qpairs": 0, 00:21:22.662 "io_qpairs": 1, 00:21:22.662 "current_admin_qpairs": 0, 00:21:22.662 "current_io_qpairs": 1, 00:21:22.662 "pending_bdev_io": 0, 00:21:22.662 "completed_nvme_io": 19068, 00:21:22.662 "transports": [ 00:21:22.662 { 00:21:22.662 "trtype": "TCP" 00:21:22.662 } 00:21:22.662 ] 00:21:22.662 } 00:21:22.662 ] 00:21:22.662 }' 00:21:22.662 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:22.662 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:22.662 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:22.662 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:22.662 17:06:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1519403 00:21:30.790 Initializing NVMe Controllers 00:21:30.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:30.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:30.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:30.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:30.790 Initialization complete. Launching workers. 
00:21:30.790 ======================================================== 00:21:30.790 Latency(us) 00:21:30.790 Device Information : IOPS MiB/s Average min max 00:21:30.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11080.64 43.28 5787.62 1197.54 43562.27 00:21:30.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14798.99 57.81 4324.29 1238.19 9382.44 00:21:30.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13724.30 53.61 4663.03 1137.69 11271.24 00:21:30.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12880.11 50.31 4968.27 931.37 9782.52 00:21:30.790 ======================================================== 00:21:30.790 Total : 52484.04 205.02 4879.85 931.37 43562.27 00:21:30.790 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.790 rmmod nvme_tcp 00:21:30.790 rmmod nvme_fabrics 00:21:30.790 rmmod nvme_keyring 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1519094 ']' 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1519094 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1519094 ']' 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1519094 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.790 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1519094 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1519094' 00:21:31.050 killing process with pid 1519094 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1519094 00:21:31.050 [2024-05-15 17:06:09.642261] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1519094 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.050 17:06:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.050 17:06:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.593 17:06:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.593 17:06:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:33.593 17:06:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:35.058 17:06:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:36.976 17:06:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.321 
17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:42.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:42.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:42.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:42.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.321 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:21:42.322 00:21:42.322 --- 10.0.0.2 ping statistics --- 00:21:42.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.322 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:42.322 00:21:42.322 --- 10.0.0.1 ping statistics --- 00:21:42.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.322 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:42.322 net.core.busy_poll = 1 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:42.322 net.core.busy_read = 1 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:42.322 17:06:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1524538 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1524538 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1524538 ']' 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:42.322 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.322 [2024-05-15 17:06:21.061578] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:21:42.322 [2024-05-15 17:06:21.061641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.322 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.322 [2024-05-15 17:06:21.130238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.596 [2024-05-15 17:06:21.197054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.596 [2024-05-15 17:06:21.197089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.596 [2024-05-15 17:06:21.197096] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.596 [2024-05-15 17:06:21.197103] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.596 [2024-05-15 17:06:21.197108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.596 [2024-05-15 17:06:21.197247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.596 [2024-05-15 17:06:21.197361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.596 [2024-05-15 17:06:21.197481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.596 [2024-05-15 17:06:21.197482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.169 17:06:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.429 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:43.429 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.429 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 [2024-05-15 17:06:22.023818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.429 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.430 Malloc1 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.430 17:06:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.430 [2024-05-15 17:06:22.083007] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:43.430 [2024-05-15 17:06:22.083253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1524735 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:43.430 17:06:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:43.430 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:45.341 "tick_rate": 2400000000, 00:21:45.341 "poll_groups": [ 00:21:45.341 { 00:21:45.341 "name": "nvmf_tgt_poll_group_000", 00:21:45.341 "admin_qpairs": 1, 00:21:45.341 "io_qpairs": 2, 00:21:45.341 "current_admin_qpairs": 1, 00:21:45.341 "current_io_qpairs": 2, 00:21:45.341 "pending_bdev_io": 0, 00:21:45.341 "completed_nvme_io": 27582, 00:21:45.341 "transports": [ 00:21:45.341 { 00:21:45.341 "trtype": "TCP" 00:21:45.341 } 00:21:45.341 ] 00:21:45.341 }, 00:21:45.341 { 00:21:45.341 "name": "nvmf_tgt_poll_group_001", 00:21:45.341 "admin_qpairs": 0, 00:21:45.341 "io_qpairs": 2, 00:21:45.341 "current_admin_qpairs": 0, 00:21:45.341 "current_io_qpairs": 2, 00:21:45.341 "pending_bdev_io": 0, 00:21:45.341 "completed_nvme_io": 41851, 00:21:45.341 "transports": [ 00:21:45.341 { 00:21:45.341 "trtype": "TCP" 00:21:45.341 } 00:21:45.341 ] 00:21:45.341 }, 00:21:45.341 { 00:21:45.341 "name": 
"nvmf_tgt_poll_group_002", 00:21:45.341 "admin_qpairs": 0, 00:21:45.341 "io_qpairs": 0, 00:21:45.341 "current_admin_qpairs": 0, 00:21:45.341 "current_io_qpairs": 0, 00:21:45.341 "pending_bdev_io": 0, 00:21:45.341 "completed_nvme_io": 0, 00:21:45.341 "transports": [ 00:21:45.341 { 00:21:45.341 "trtype": "TCP" 00:21:45.341 } 00:21:45.341 ] 00:21:45.341 }, 00:21:45.341 { 00:21:45.341 "name": "nvmf_tgt_poll_group_003", 00:21:45.341 "admin_qpairs": 0, 00:21:45.341 "io_qpairs": 0, 00:21:45.341 "current_admin_qpairs": 0, 00:21:45.341 "current_io_qpairs": 0, 00:21:45.341 "pending_bdev_io": 0, 00:21:45.341 "completed_nvme_io": 0, 00:21:45.341 "transports": [ 00:21:45.341 { 00:21:45.341 "trtype": "TCP" 00:21:45.341 } 00:21:45.341 ] 00:21:45.341 } 00:21:45.341 ] 00:21:45.341 }' 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:45.341 17:06:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1524735 00:21:53.472 Initializing NVMe Controllers 00:21:53.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:53.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:53.472 Initialization complete. Launching workers. 
00:21:53.472 ======================================================== 00:21:53.472 Latency(us) 00:21:53.472 Device Information : IOPS MiB/s Average min max 00:21:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11809.10 46.13 5419.47 1201.24 50246.95 00:21:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9562.70 37.35 6696.38 1118.01 50945.36 00:21:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10572.80 41.30 6052.48 1282.86 53051.91 00:21:53.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8986.20 35.10 7123.84 1271.74 51067.54 00:21:53.472 ======================================================== 00:21:53.472 Total : 40930.80 159.89 6255.49 1118.01 53051.91 00:21:53.472 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.472 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.472 rmmod nvme_tcp 00:21:53.472 rmmod nvme_fabrics 00:21:53.472 rmmod nvme_keyring 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1524538 ']' 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1524538 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1524538 ']' 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1524538 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1524538 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1524538' 00:21:53.733 killing process with pid 1524538 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1524538 00:21:53.733 [2024-05-15 17:06:32.384347] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1524538 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.733 17:06:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.733 17:06:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.032 17:06:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.032 17:06:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:57.032 00:21:57.032 real 0m53.216s 00:21:57.032 user 2m50.430s 00:21:57.032 sys 0m10.302s 00:21:57.032 17:06:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:57.032 17:06:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.032 ************************************ 00:21:57.032 END TEST nvmf_perf_adq 00:21:57.032 ************************************ 00:21:57.032 17:06:35 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:57.032 17:06:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:57.032 17:06:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:57.032 17:06:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.032 ************************************ 00:21:57.032 START TEST nvmf_shutdown 00:21:57.032 ************************************ 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:57.032 * Looking for test storage... 
00:21:57.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.032 ************************************ 00:21:57.032 START TEST nvmf_shutdown_tc1 00:21:57.032 ************************************ 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:21:57.032 17:06:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.032 17:06:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.176 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.177 17:06:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.177 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.177 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:22:05.177 00:22:05.177 --- 10.0.0.2 ping statistics --- 00:22:05.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.177 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:22:05.177 17:06:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:22:05.177 00:22:05.177 --- 10.0.0.1 ping statistics --- 00:22:05.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.177 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1531119 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1531119 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1531119 ']' 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.177 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.178 [2024-05-15 17:06:43.108744] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:22:05.178 [2024-05-15 17:06:43.108808] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.178 [2024-05-15 17:06:43.198100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.178 [2024-05-15 17:06:43.291955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.178 [2024-05-15 17:06:43.292010] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.178 [2024-05-15 17:06:43.292018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.178 [2024-05-15 17:06:43.292025] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.178 [2024-05-15 17:06:43.292032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.178 [2024-05-15 17:06:43.292160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.178 [2024-05-15 17:06:43.292331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.178 [2024-05-15 17:06:43.292496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.178 [2024-05-15 17:06:43.292496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.178 [2024-05-15 17:06:43.935020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:05.178 17:06:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:05.178 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:05.178 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.178 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.438 Malloc1 00:22:05.438 [2024-05-15 17:06:44.038281] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:05.438 [2024-05-15 17:06:44.038517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.438 Malloc2 00:22:05.438 Malloc3 00:22:05.438 Malloc4 00:22:05.438 Malloc5 00:22:05.438 Malloc6 00:22:05.438 Malloc7 00:22:05.699 Malloc8 00:22:05.699 Malloc9 00:22:05.699 Malloc10 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.699 17:06:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1531497 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1531497 /var/tmp/bdevperf.sock 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1531497 ']' 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.699 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 [2024-05-15 17:06:44.496012] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:22:05.700 [2024-05-15 17:06:44.496067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 "ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:05.700 { 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme$subsystem", 00:22:05.700 "trtype": "$TEST_TRANSPORT", 00:22:05.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "$NVMF_PORT", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.700 "hdgst": ${hdgst:-false}, 00:22:05.700 
"ddgst": ${ddgst:-false} 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.700 } 00:22:05.700 EOF 00:22:05.700 )") 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:05.700 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.700 17:06:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:05.700 "params": { 00:22:05.700 "name": "Nvme1", 00:22:05.700 "trtype": "tcp", 00:22:05.700 "traddr": "10.0.0.2", 00:22:05.700 "adrfam": "ipv4", 00:22:05.700 "trsvcid": "4420", 00:22:05.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.700 "hdgst": false, 00:22:05.700 "ddgst": false 00:22:05.700 }, 00:22:05.700 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme2", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme3", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme4", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme5", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme6", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme7", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 
00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme8", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme9", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 },{ 00:22:05.701 "params": { 00:22:05.701 "name": "Nvme10", 00:22:05.701 "trtype": "tcp", 00:22:05.701 "traddr": "10.0.0.2", 00:22:05.701 "adrfam": "ipv4", 00:22:05.701 "trsvcid": "4420", 00:22:05.701 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:05.701 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:05.701 "hdgst": false, 00:22:05.701 "ddgst": false 00:22:05.701 }, 00:22:05.701 "method": "bdev_nvme_attach_controller" 00:22:05.701 }' 00:22:05.961 [2024-05-15 17:06:44.556168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.961 [2024-05-15 17:06:44.620532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.345 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:07.345 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1531497 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:07.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1531497 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:07.346 17:06:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1531119 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:08.291 17:06:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.291 { 00:22:08.291 "params": { 00:22:08.291 "name": "Nvme$subsystem", 00:22:08.291 "trtype": "$TEST_TRANSPORT", 00:22:08.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.291 "adrfam": "ipv4", 00:22:08.291 "trsvcid": "$NVMF_PORT", 00:22:08.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.291 "hdgst": ${hdgst:-false}, 00:22:08.291 "ddgst": ${ddgst:-false} 00:22:08.291 }, 00:22:08.291 "method": "bdev_nvme_attach_controller" 00:22:08.291 } 00:22:08.291 EOF 00:22:08.291 )") 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.291 { 00:22:08.291 "params": { 00:22:08.291 "name": "Nvme$subsystem", 00:22:08.291 "trtype": "$TEST_TRANSPORT", 00:22:08.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.291 "adrfam": "ipv4", 00:22:08.291 "trsvcid": "$NVMF_PORT", 00:22:08.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.291 "hdgst": ${hdgst:-false}, 00:22:08.291 "ddgst": ${ddgst:-false} 00:22:08.291 }, 00:22:08.291 "method": "bdev_nvme_attach_controller" 00:22:08.291 } 00:22:08.291 EOF 00:22:08.291 )") 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.291 { 00:22:08.291 "params": { 00:22:08.291 "name": "Nvme$subsystem", 00:22:08.291 "trtype": "$TEST_TRANSPORT", 00:22:08.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.291 "adrfam": "ipv4", 00:22:08.291 "trsvcid": "$NVMF_PORT", 00:22:08.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.291 "hdgst": ${hdgst:-false}, 00:22:08.291 "ddgst": ${ddgst:-false} 00:22:08.291 }, 00:22:08.291 "method": "bdev_nvme_attach_controller" 00:22:08.291 } 00:22:08.291 EOF 00:22:08.291 )") 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.291 { 00:22:08.291 "params": { 00:22:08.291 "name": "Nvme$subsystem", 00:22:08.291 "trtype": "$TEST_TRANSPORT", 00:22:08.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.291 "adrfam": "ipv4", 00:22:08.291 "trsvcid": "$NVMF_PORT", 00:22:08.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.291 "hdgst": ${hdgst:-false}, 00:22:08.291 "ddgst": ${ddgst:-false} 00:22:08.291 }, 00:22:08.291 "method": "bdev_nvme_attach_controller" 00:22:08.291 } 00:22:08.291 EOF 00:22:08.291 )") 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.291 17:06:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.291 { 00:22:08.291 "params": { 00:22:08.291 "name": "Nvme$subsystem", 00:22:08.291 "trtype": "$TEST_TRANSPORT", 00:22:08.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.291 "adrfam": "ipv4", 00:22:08.291 "trsvcid": "$NVMF_PORT", 00:22:08.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.291 "hdgst": ${hdgst:-false}, 00:22:08.291 "ddgst": ${ddgst:-false} 00:22:08.291 }, 00:22:08.291 "method": "bdev_nvme_attach_controller" 00:22:08.291 } 00:22:08.291 EOF 00:22:08.291 )") 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.291 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.291 { 00:22:08.291 "params": { 00:22:08.291 "name": "Nvme$subsystem", 00:22:08.291 "trtype": "$TEST_TRANSPORT", 00:22:08.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.291 "adrfam": "ipv4", 00:22:08.291 "trsvcid": "$NVMF_PORT", 00:22:08.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.291 "hdgst": ${hdgst:-false}, 00:22:08.291 "ddgst": ${ddgst:-false} 00:22:08.291 }, 00:22:08.291 "method": "bdev_nvme_attach_controller" 00:22:08.291 } 00:22:08.291 EOF 00:22:08.291 )") 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.292 [2024-05-15 17:06:46.922575] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
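The gen_nvmf_target_json trace above is one heredoc per subsystem ID passed on the command line (1 through 10 here); each pass appends a bdev_nvme_attach_controller stanza that is later joined and handed to the app through --json /dev/fd/63. A condensed sketch of that pattern, reconstructed from the traced lines (the real helper lives in test/nvmf/common.sh and additionally wraps the stanzas in the full subsystems/config structure expected by --json, which is omitted here; the jq/printf join below is simplified accordingly):

gen_nvmf_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
                # one attach-controller stanza per subsystem, filled in from the test environment
                config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
                )")
        done
        # join the stanzas with commas; the [] wrapper is only so jq can validate/pretty-print them
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .
}

It is invoked as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, which is why ten near-identical stanzas scroll past in the trace, one per cnode.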
00:22:08.292 [2024-05-15 17:06:46.922629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531863 ] 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.292 { 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme$subsystem", 00:22:08.292 "trtype": "$TEST_TRANSPORT", 00:22:08.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "$NVMF_PORT", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.292 "hdgst": ${hdgst:-false}, 00:22:08.292 "ddgst": ${ddgst:-false} 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 } 00:22:08.292 EOF 00:22:08.292 )") 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.292 { 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme$subsystem", 00:22:08.292 "trtype": "$TEST_TRANSPORT", 00:22:08.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "$NVMF_PORT", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.292 "hdgst": ${hdgst:-false}, 00:22:08.292 "ddgst": ${ddgst:-false} 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 } 00:22:08.292 EOF 00:22:08.292 )") 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.292 { 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme$subsystem", 00:22:08.292 "trtype": "$TEST_TRANSPORT", 00:22:08.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "$NVMF_PORT", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.292 "hdgst": ${hdgst:-false}, 00:22:08.292 "ddgst": ${ddgst:-false} 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 } 00:22:08.292 EOF 00:22:08.292 )") 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.292 { 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme$subsystem", 00:22:08.292 "trtype": "$TEST_TRANSPORT", 00:22:08.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "$NVMF_PORT", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.292 "hdgst": ${hdgst:-false}, 
00:22:08.292 "ddgst": ${ddgst:-false} 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 } 00:22:08.292 EOF 00:22:08.292 )") 00:22:08.292 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:08.292 17:06:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme1", 00:22:08.292 "trtype": "tcp", 00:22:08.292 "traddr": "10.0.0.2", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "4420", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.292 "hdgst": false, 00:22:08.292 "ddgst": false 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 },{ 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme2", 00:22:08.292 "trtype": "tcp", 00:22:08.292 "traddr": "10.0.0.2", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "4420", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:08.292 "hdgst": false, 00:22:08.292 "ddgst": false 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 },{ 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme3", 00:22:08.292 "trtype": "tcp", 00:22:08.292 "traddr": "10.0.0.2", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "4420", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:08.292 "hdgst": false, 00:22:08.292 "ddgst": false 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 },{ 00:22:08.292 "params": { 00:22:08.292 "name": "Nvme4", 00:22:08.292 "trtype": "tcp", 00:22:08.292 "traddr": "10.0.0.2", 00:22:08.292 "adrfam": "ipv4", 00:22:08.292 "trsvcid": "4420", 00:22:08.292 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:08.292 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:08.292 "hdgst": false, 00:22:08.292 "ddgst": false 00:22:08.292 }, 00:22:08.292 "method": "bdev_nvme_attach_controller" 00:22:08.292 },{ 00:22:08.293 "params": { 00:22:08.293 "name": "Nvme5", 00:22:08.293 "trtype": "tcp", 00:22:08.293 "traddr": "10.0.0.2", 00:22:08.293 "adrfam": "ipv4", 00:22:08.293 "trsvcid": "4420", 00:22:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:08.293 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:08.293 "hdgst": false, 00:22:08.293 "ddgst": false 00:22:08.293 }, 00:22:08.293 "method": "bdev_nvme_attach_controller" 00:22:08.293 },{ 00:22:08.293 "params": { 00:22:08.293 "name": "Nvme6", 00:22:08.293 "trtype": "tcp", 00:22:08.293 "traddr": "10.0.0.2", 00:22:08.293 "adrfam": "ipv4", 00:22:08.293 "trsvcid": "4420", 00:22:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:08.293 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:08.293 "hdgst": false, 00:22:08.293 "ddgst": false 00:22:08.293 }, 00:22:08.293 "method": "bdev_nvme_attach_controller" 00:22:08.293 },{ 00:22:08.293 "params": { 00:22:08.293 "name": "Nvme7", 00:22:08.293 "trtype": "tcp", 00:22:08.293 "traddr": "10.0.0.2", 00:22:08.293 "adrfam": "ipv4", 00:22:08.293 "trsvcid": "4420", 00:22:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:08.293 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:08.293 "hdgst": false, 00:22:08.293 "ddgst": false 
00:22:08.293 }, 00:22:08.293 "method": "bdev_nvme_attach_controller" 00:22:08.293 },{ 00:22:08.293 "params": { 00:22:08.293 "name": "Nvme8", 00:22:08.293 "trtype": "tcp", 00:22:08.293 "traddr": "10.0.0.2", 00:22:08.293 "adrfam": "ipv4", 00:22:08.293 "trsvcid": "4420", 00:22:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:08.293 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:08.293 "hdgst": false, 00:22:08.293 "ddgst": false 00:22:08.293 }, 00:22:08.293 "method": "bdev_nvme_attach_controller" 00:22:08.293 },{ 00:22:08.293 "params": { 00:22:08.293 "name": "Nvme9", 00:22:08.293 "trtype": "tcp", 00:22:08.293 "traddr": "10.0.0.2", 00:22:08.293 "adrfam": "ipv4", 00:22:08.293 "trsvcid": "4420", 00:22:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:08.293 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:08.293 "hdgst": false, 00:22:08.293 "ddgst": false 00:22:08.293 }, 00:22:08.293 "method": "bdev_nvme_attach_controller" 00:22:08.293 },{ 00:22:08.293 "params": { 00:22:08.293 "name": "Nvme10", 00:22:08.293 "trtype": "tcp", 00:22:08.293 "traddr": "10.0.0.2", 00:22:08.293 "adrfam": "ipv4", 00:22:08.293 "trsvcid": "4420", 00:22:08.293 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:08.293 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:08.293 "hdgst": false, 00:22:08.293 "ddgst": false 00:22:08.293 }, 00:22:08.293 "method": "bdev_nvme_attach_controller" 00:22:08.293 }' 00:22:08.293 [2024-05-15 17:06:46.982880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.293 [2024-05-15 17:06:47.046991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.677 Running I/O for 1 seconds... 00:22:11.060 00:22:11.060 Latency(us) 00:22:11.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.060 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme1n1 : 1.16 221.06 13.82 0.00 0.00 283905.28 14636.37 265639.25 00:22:11.060 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme2n1 : 1.12 233.60 14.60 0.00 0.00 264264.24 5434.03 265639.25 00:22:11.060 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme3n1 : 1.19 269.54 16.85 0.00 0.00 225392.64 13489.49 251658.24 00:22:11.060 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme4n1 : 1.11 229.79 14.36 0.00 0.00 258929.49 34734.08 248162.99 00:22:11.060 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme5n1 : 1.14 223.95 14.00 0.00 0.00 260444.59 17257.81 242920.11 00:22:11.060 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme6n1 : 1.15 221.82 13.86 0.00 0.00 257780.48 17476.27 251658.24 00:22:11.060 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme7n1 : 1.20 266.03 16.63 0.00 0.00 211592.19 16274.77 242920.11 00:22:11.060 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme8n1 : 1.19 272.30 17.02 0.00 0.00 201101.46 
7372.80 214084.27 00:22:11.060 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme9n1 : 1.19 220.14 13.76 0.00 0.00 243746.12 1706.67 272629.76 00:22:11.060 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:11.060 Verification LBA range: start 0x0 length 0x400 00:22:11.060 Nvme10n1 : 1.21 263.59 16.47 0.00 0.00 200689.92 12014.93 263891.63 00:22:11.060 =================================================================================================================== 00:22:11.060 Total : 2421.82 151.36 0.00 0.00 237966.92 1706.67 272629.76 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.060 rmmod nvme_tcp 00:22:11.060 rmmod nvme_fabrics 00:22:11.060 rmmod nvme_keyring 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1531119 ']' 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1531119 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1531119 ']' 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1531119 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1531119 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1531119' 00:22:11.060 killing process with pid 1531119 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1531119 00:22:11.060 [2024-05-15 17:06:49.750036] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:11.060 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1531119 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.321 17:06:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.232 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.232 00:22:13.232 real 0m16.245s 00:22:13.232 user 0m32.611s 00:22:13.232 sys 0m6.454s 00:22:13.232 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:13.232 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 ************************************ 00:22:13.232 END TEST nvmf_shutdown_tc1 00:22:13.232 ************************************ 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:13.503 ************************************ 00:22:13.503 START TEST nvmf_shutdown_tc2 00:22:13.503 ************************************ 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.503 
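Before tc2 begins, the tc1 epilogue above (stoptarget followed by nvmftestfini) removes the generated state files, unloads the initiator-side nvme-tcp/nvme-fabrics modules, kills the nvmf target (pid 1531119) and tears down the test namespace. A rough approximation of that ordering, based on the traced commands (nvmftestfini and _remove_spdk_ns are defined in test/nvmf/common.sh; the namespace and interface names are the ones used in this run, and the netns delete is an assumption about what _remove_spdk_ns does):

nvmf_teardown_sketch() {
        # drop the bdevperf state file left behind by the test case
        rm -f ./local-job0-0-verify.state
        # best-effort unload of the host-side NVMe-oF kernel modules
        modprobe -v -r nvme-tcp || true
        modprobe -v -r nvme-fabrics || true
        # stop the nvmf target started for this test case and reap it
        if [[ -n ${nvmfpid:-} ]]; then
                kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null || true
        fi
        # assumed namespace cleanup: drop the target-side netns and flush the initiator address
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1 2>/dev/null || true
}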
17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:13.503 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:13.503 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:13.503 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:13.503 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:13.503 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
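The device discovery just traced pairs each detected E810 function (0x8086:0x159b, driver ice) with its kernel net device via sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. The core of that mapping, condensed from the traced lines (the real helper also filters on link state, e.g. the [[ up == up ]] checks above, which is skipped here):

shopt -s nullglob   # so a PCI function with no bound netdev yields an empty array
net_devs=()
for pci in "${pci_devs[@]}"; do
        # each bound PCI function exposes its netdev name(s) under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue
        # strip the sysfs path, keeping just the interface names (e.g. cvl_0_0)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
done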
00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:13.504 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:13.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:22:13.765 00:22:13.765 --- 10.0.0.2 ping statistics --- 00:22:13.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.765 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:22:13.765 00:22:13.765 --- 10.0.0.1 ping statistics --- 00:22:13.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.765 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1533108 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1533108 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1533108 ']' 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:13.765 17:06:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.765 [2024-05-15 17:06:52.540953] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
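With both ports identified, nvmf_tcp_init above splits them across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target-side port (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator-side port (10.0.0.1/24), TCP/4420 is allowed in, and both directions are ping-verified before the target is launched inside the namespace. The same topology as plain commands, taken from the trace:

# target side in its own namespace, initiator side in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic to the listener port through
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before starting nvmf_tgt inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmfappstart then prepends the NVMF_TARGET_NS_CMD wrapper, which is why the nvmf_tgt command line above is prefixed with ip netns exec cvl_0_0_ns_spdk.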
00:22:13.765 [2024-05-15 17:06:52.541021] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.765 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.026 [2024-05-15 17:06:52.630225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.026 [2024-05-15 17:06:52.690778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.026 [2024-05-15 17:06:52.690811] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.026 [2024-05-15 17:06:52.690816] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.026 [2024-05-15 17:06:52.690821] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.026 [2024-05-15 17:06:52.690824] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.026 [2024-05-15 17:06:52.690936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.026 [2024-05-15 17:06:52.691098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.026 [2024-05-15 17:06:52.691257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.026 [2024-05-15 17:06:52.691259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.598 [2024-05-15 17:06:53.361887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.598 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.858 Malloc1 00:22:14.858 [2024-05-15 17:06:53.460262] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:14.858 [2024-05-15 17:06:53.460478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.858 Malloc2 00:22:14.858 Malloc3 00:22:14.858 Malloc4 00:22:14.858 Malloc5 00:22:14.858 Malloc6 00:22:14.858 Malloc7 00:22:15.119 Malloc8 00:22:15.119 Malloc9 00:22:15.119 Malloc10 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 17:06:53 
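create_subsystems above appends one block per subsystem to rpcs.txt and then replays the whole file through rpc_cmd (shutdown.sh@35 in the trace), which is what produces the Malloc1..Malloc10 bdevs and the single NVMe/TCP listener on 10.0.0.2:4420 reported just below. The file's contents are not echoed in the trace; the block below is only a plausible reconstruction using standard SPDK RPCs and this run's addresses (the sizes, serial numbers and exact flag set are assumptions):

# hypothetical rpcs.txt generator, one subsystem per loop iteration
for i in {1..10}; do
        {
                echo "bdev_malloc_create -b Malloc$i 64 512"
                echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
                echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
                echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
done
# scripts/rpc.py accepts one RPC per line on stdin, so the file can be replayed in one shot:
#   rpc_cmd < rpcs.txt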
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1533344 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1533344 /var/tmp/bdevperf.sock 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1533344 ']' 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme$subsystem", 00:22:15.119 "trtype": "$TEST_TRANSPORT", 00:22:15.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "$NVMF_PORT", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.119 "hdgst": ${hdgst:-false}, 00:22:15.119 "ddgst": ${ddgst:-false} 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 } 00:22:15.119 EOF 00:22:15.119 )") 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme$subsystem", 00:22:15.119 "trtype": "$TEST_TRANSPORT", 00:22:15.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "$NVMF_PORT", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.119 "hdgst": ${hdgst:-false}, 00:22:15.119 "ddgst": ${ddgst:-false} 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 } 00:22:15.119 EOF 00:22:15.119 )") 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme$subsystem", 00:22:15.119 "trtype": "$TEST_TRANSPORT", 00:22:15.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "$NVMF_PORT", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.119 "hdgst": ${hdgst:-false}, 00:22:15.119 "ddgst": ${ddgst:-false} 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 } 00:22:15.119 EOF 00:22:15.119 )") 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme$subsystem", 00:22:15.119 "trtype": "$TEST_TRANSPORT", 00:22:15.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "$NVMF_PORT", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.119 "hdgst": ${hdgst:-false}, 00:22:15.119 "ddgst": ${ddgst:-false} 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 } 00:22:15.119 EOF 00:22:15.119 )") 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.119 "name": "Nvme$subsystem", 00:22:15.119 "trtype": "$TEST_TRANSPORT", 00:22:15.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.119 "adrfam": "ipv4", 00:22:15.119 "trsvcid": "$NVMF_PORT", 00:22:15.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.119 "hdgst": ${hdgst:-false}, 00:22:15.119 "ddgst": ${ddgst:-false} 00:22:15.119 }, 00:22:15.119 "method": "bdev_nvme_attach_controller" 00:22:15.119 } 00:22:15.119 EOF 00:22:15.119 )") 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.119 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.119 { 00:22:15.119 "params": { 00:22:15.120 "name": "Nvme$subsystem", 00:22:15.120 "trtype": "$TEST_TRANSPORT", 00:22:15.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "$NVMF_PORT", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.120 "hdgst": ${hdgst:-false}, 00:22:15.120 "ddgst": ${ddgst:-false} 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 } 00:22:15.120 EOF 00:22:15.120 )") 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.120 { 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme$subsystem", 00:22:15.120 "trtype": "$TEST_TRANSPORT", 00:22:15.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "$NVMF_PORT", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.120 "hdgst": ${hdgst:-false}, 00:22:15.120 "ddgst": ${ddgst:-false} 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 } 00:22:15.120 EOF 00:22:15.120 )") 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.120 [2024-05-15 17:06:53.904164] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:22:15.120 [2024-05-15 17:06:53.904215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533344 ] 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.120 { 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme$subsystem", 00:22:15.120 "trtype": "$TEST_TRANSPORT", 00:22:15.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "$NVMF_PORT", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.120 "hdgst": ${hdgst:-false}, 00:22:15.120 "ddgst": ${ddgst:-false} 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 } 00:22:15.120 EOF 00:22:15.120 )") 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.120 { 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme$subsystem", 00:22:15.120 "trtype": "$TEST_TRANSPORT", 00:22:15.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "$NVMF_PORT", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.120 "hdgst": ${hdgst:-false}, 00:22:15.120 "ddgst": ${ddgst:-false} 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 } 00:22:15.120 EOF 00:22:15.120 )") 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.120 { 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme$subsystem", 00:22:15.120 "trtype": "$TEST_TRANSPORT", 00:22:15.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "$NVMF_PORT", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.120 
"hdgst": ${hdgst:-false}, 00:22:15.120 "ddgst": ${ddgst:-false} 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 } 00:22:15.120 EOF 00:22:15.120 )") 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:15.120 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:15.120 17:06:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme1", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme2", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme3", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme4", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme5", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme6", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme7", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.120 "hdgst": false, 
00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme8", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme9", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 },{ 00:22:15.120 "params": { 00:22:15.120 "name": "Nvme10", 00:22:15.120 "trtype": "tcp", 00:22:15.120 "traddr": "10.0.0.2", 00:22:15.120 "adrfam": "ipv4", 00:22:15.120 "trsvcid": "4420", 00:22:15.120 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.120 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.120 "hdgst": false, 00:22:15.120 "ddgst": false 00:22:15.120 }, 00:22:15.120 "method": "bdev_nvme_attach_controller" 00:22:15.120 }' 00:22:15.381 [2024-05-15 17:06:53.964119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.381 [2024-05-15 17:06:54.028764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.764 Running I/O for 10 seconds... 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:16.764 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:17.023 17:06:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:17.282 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:17.282 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:17.282 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:17.282 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:17.282 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.282 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1533344 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1533344 ']' 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1533344 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:17.542 17:06:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1533344 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1533344' 00:22:17.542 killing process with pid 1533344 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1533344 00:22:17.542 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1533344 00:22:17.542 Received shutdown signal, test time was about 0.971544 seconds 00:22:17.542 00:22:17.542 Latency(us) 00:22:17.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.542 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme1n1 : 0.94 203.82 12.74 0.00 0.00 310231.89 20097.71 253405.87 00:22:17.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme2n1 : 0.93 206.59 12.91 0.00 0.00 299840.57 21408.43 267386.88 00:22:17.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme3n1 : 0.96 269.82 16.86 0.00 0.00 224588.09 4068.69 222822.40 00:22:17.542 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme4n1 : 0.96 267.59 16.72 0.00 0.00 222100.05 20643.84 255153.49 00:22:17.542 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme5n1 : 0.97 261.68 16.35 0.00 0.00 222312.39 17803.95 222822.40 00:22:17.542 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme6n1 : 0.97 265.07 16.57 0.00 0.00 214947.20 20862.29 288358.40 00:22:17.542 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme7n1 : 0.94 212.98 13.31 0.00 0.00 257754.96 7427.41 241172.48 00:22:17.542 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme8n1 : 0.95 277.83 17.36 0.00 0.00 193687.95 7973.55 230686.72 00:22:17.542 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme9n1 : 0.96 266.14 16.63 0.00 0.00 199441.28 19660.80 246415.36 00:22:17.542 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:17.542 Verification LBA range: start 0x0 length 0x400 00:22:17.542 Nvme10n1 : 0.95 202.12 12.63 0.00 0.00 255786.67 16602.45 263891.63 00:22:17.542 =================================================================================================================== 00:22:17.542 Total : 2433.64 152.10 0.00 0.00 235484.47 
4068.69 288358.40 00:22:17.802 17:06:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1533108 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:18.743 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.744 rmmod nvme_tcp 00:22:18.744 rmmod nvme_fabrics 00:22:18.744 rmmod nvme_keyring 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1533108 ']' 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1533108 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1533108 ']' 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1533108 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1533108 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1533108' 00:22:18.744 killing process with pid 1533108 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1533108 00:22:18.744 [2024-05-15 17:06:57.567989] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:22:18.744 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1533108 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.004 17:06:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.551 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.551 00:22:21.551 real 0m7.768s 00:22:21.551 user 0m23.097s 00:22:21.551 sys 0m1.268s 00:22:21.551 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:21.551 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.551 ************************************ 00:22:21.551 END TEST nvmf_shutdown_tc2 00:22:21.552 ************************************ 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:21.552 ************************************ 00:22:21.552 START TEST nvmf_shutdown_tc3 00:22:21.552 ************************************ 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy 
!= virt ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.552 17:06:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:21.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:21.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:22:21.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:21.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.552 17:06:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.552 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:22:21.552 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:22:21.553 00:22:21.553 --- 10.0.0.2 ping statistics --- 00:22:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.553 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:21.553 00:22:21.553 --- 10.0.0.1 ping statistics --- 00:22:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.553 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1534786 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1534786 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1534786 ']' 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.553 17:07:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.553 [2024-05-15 17:07:00.363104] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:22:21.553 [2024-05-15 17:07:00.363176] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.814 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.814 [2024-05-15 17:07:00.454130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.814 [2024-05-15 17:07:00.516096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.814 [2024-05-15 17:07:00.516127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.814 [2024-05-15 17:07:00.516132] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.814 [2024-05-15 17:07:00.516137] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.814 [2024-05-15 17:07:00.516141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
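At this point nvmf_shutdown_tc3 has brought the target environment up from scratch: nvmftestinit detected the two ice-bound E810 ports (cvl_0_0 and cvl_0_1), moved cvl_0_0 into a fresh network namespace, assigned 10.0.0.2/10.0.0.1 to the pair, verified both directions with ping, and then launched nvmf_tgt inside that namespace. The following is a condensed replay of the namespace split traced above (nvmf/common.sh@229-268); every command is taken from the trace, with the interface names and 10.0.0.x addresses being the values from this particular run rather than fixed constants, and root privileges assumed.

# Replay of the nvmf_tcp_init namespace split traced above (nvmf/common.sh@229-268).
# Needs root; cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 are this run's values.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator

The target app is then started inside the namespace with -m 0x1E while each bdevperf instance runs with -c 0x1 in its EAL parameters, presumably so the target reactors (cores 1-4) and the initiator (core 0) never share a core.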
00:22:21.814 [2024-05-15 17:07:00.516249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.814 [2024-05-15 17:07:00.516407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.814 [2024-05-15 17:07:00.516591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.814 [2024-05-15 17:07:00.516598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.386 [2024-05-15 17:07:01.190782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.386 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.648 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.648 Malloc1 00:22:22.648 [2024-05-15 17:07:01.289364] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:22.648 [2024-05-15 17:07:01.289564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.648 Malloc2 00:22:22.648 Malloc3 00:22:22.648 Malloc4 00:22:22.648 Malloc5 00:22:22.648 Malloc6 00:22:22.910 Malloc7 00:22:22.910 Malloc8 00:22:22.910 Malloc9 00:22:22.910 Malloc10 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1535167 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1535167 /var/tmp/bdevperf.sock 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1535167 ']' 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
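The bdevperf launch that follows (target/shutdown.sh@124-127) repeats the pattern already traced for tc2 above (@102-110): start bdevperf against a private RPC socket with its bdev configuration supplied on --json through process substitution (hence /dev/fd/63 in the trace), wait for the socket and for framework init, then poll I/O until the workload is demonstrably running. The sketch below is reconstructed from the xtrace, so the helper bodies (waitforlisten, rpc_cmd, waitforio from the sourced autotest scripts) are paraphrased rather than verbatim, and the bdevperf path is the one used by this job.

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
sock=/var/tmp/bdevperf.sock

# Launch bdevperf on its own RPC socket; the JSON config comes from
# gen_nvmf_target_json via process substitution. Queue depth 64,
# 64 KiB verify I/O, 10 second run, matching the flags in the trace.
"$bdevperf" -r "$sock" --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

waitforlisten "$perfpid" "$sock"        # autotest helper: wait for the RPC socket
rpc_cmd -s "$sock" framework_wait_init  # wait until all ten controllers have attached

# waitforio (shutdown.sh@50-69): poll Nvme1n1 until it has completed at least
# 100 reads, retrying up to 10 times with 0.25 s between attempts.
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break
    sleep 0.25
done

In tc2 above the same poll read 3, then 67, then 131 ops before crossing the 100-read threshold at shutdown.sh@63 and letting the teardown proceed.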
00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:22:22.910 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:22:22.911 {
00:22:22.911 "params": {
00:22:22.911 "name": "Nvme$subsystem",
00:22:22.911 "trtype": "$TEST_TRANSPORT",
00:22:22.911 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:22.911 "adrfam": "ipv4",
00:22:22.911 "trsvcid": "$NVMF_PORT",
00:22:22.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:22.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:22.911 "hdgst": ${hdgst:-false},
00:22:22.911 "ddgst": ${ddgst:-false}
00:22:22.911 },
00:22:22.911 "method": "bdev_nvme_attach_controller"
00:22:22.911 }
00:22:22.911 EOF
00:22:22.911 )")
00:22:22.911 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
(the for/config+=(...)/cat trace above is repeated identically for each of the 10 requested subsystems)
00:22:22.911 [2024-05-15 17:07:01.738498] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization...
00:22:22.911 [2024-05-15 17:07:01.738563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535167 ]
00:22:23.173 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
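For reference, the gen_nvmf_target_json trace above boils down to one pattern: a heredoc-produced JSON fragment per subsystem is appended to a bash array, the fragments are comma-joined into a bdev config array, and jq pretty-prints the result, which bdevperf consumes through process substitution (hence --json /dev/fd/63 on the command line). Below is a minimal, self-contained sketch of that pattern, not the actual helper in nvmf/common.sh; NVMF_TARGET_IP and NVMF_PORT are placeholder names of my own.

#!/usr/bin/env bash
# Sketch of the per-subsystem config generation seen in the trace above (not the real helper).
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller fragment per subsystem.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "${NVMF_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments and let jq validate and pretty-print the final document.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}
# bdevperf reads the generated JSON via process substitution, which bash exposes as /dev/fd/63:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json_sketch 1 2 3) -q 64 -o 65536 -w verify -t 10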
00:22:23.173 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.173 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:23.173 17:07:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme1", 00:22:23.173 "trtype": "tcp", 00:22:23.173 "traddr": "10.0.0.2", 00:22:23.173 "adrfam": "ipv4", 00:22:23.173 "trsvcid": "4420", 00:22:23.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.173 "hdgst": false, 00:22:23.173 "ddgst": false 00:22:23.173 }, 00:22:23.173 "method": "bdev_nvme_attach_controller" 00:22:23.173 },{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme2", 00:22:23.173 "trtype": "tcp", 00:22:23.173 "traddr": "10.0.0.2", 00:22:23.173 "adrfam": "ipv4", 00:22:23.173 "trsvcid": "4420", 00:22:23.173 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.173 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:23.173 "hdgst": false, 00:22:23.173 "ddgst": false 00:22:23.173 }, 00:22:23.173 "method": "bdev_nvme_attach_controller" 00:22:23.173 },{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme3", 00:22:23.173 "trtype": "tcp", 00:22:23.173 "traddr": "10.0.0.2", 00:22:23.173 "adrfam": "ipv4", 00:22:23.173 "trsvcid": "4420", 00:22:23.173 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:23.173 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:23.173 "hdgst": false, 00:22:23.173 "ddgst": false 00:22:23.173 }, 00:22:23.173 "method": "bdev_nvme_attach_controller" 00:22:23.173 },{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme4", 00:22:23.173 "trtype": "tcp", 00:22:23.173 "traddr": "10.0.0.2", 00:22:23.173 "adrfam": "ipv4", 00:22:23.173 "trsvcid": "4420", 00:22:23.173 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:23.173 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:23.173 "hdgst": false, 00:22:23.173 "ddgst": false 00:22:23.173 }, 00:22:23.173 "method": "bdev_nvme_attach_controller" 00:22:23.173 },{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme5", 00:22:23.173 "trtype": "tcp", 00:22:23.173 "traddr": "10.0.0.2", 00:22:23.173 "adrfam": "ipv4", 00:22:23.173 "trsvcid": "4420", 00:22:23.173 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:23.173 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:23.173 "hdgst": false, 00:22:23.173 "ddgst": false 00:22:23.173 }, 00:22:23.173 "method": "bdev_nvme_attach_controller" 00:22:23.173 },{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme6", 00:22:23.173 "trtype": "tcp", 00:22:23.173 "traddr": "10.0.0.2", 00:22:23.173 "adrfam": "ipv4", 00:22:23.173 "trsvcid": "4420", 00:22:23.173 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:23.173 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:23.173 "hdgst": false, 00:22:23.173 "ddgst": false 00:22:23.173 }, 00:22:23.173 "method": "bdev_nvme_attach_controller" 00:22:23.173 },{ 00:22:23.173 "params": { 00:22:23.173 "name": "Nvme7", 00:22:23.174 "trtype": "tcp", 00:22:23.174 "traddr": "10.0.0.2", 00:22:23.174 "adrfam": "ipv4", 00:22:23.174 "trsvcid": "4420", 00:22:23.174 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:23.174 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:23.174 "hdgst": false, 00:22:23.174 "ddgst": false 00:22:23.174 }, 00:22:23.174 "method": "bdev_nvme_attach_controller" 00:22:23.174 },{ 00:22:23.174 "params": { 00:22:23.174 "name": "Nvme8", 00:22:23.174 "trtype": "tcp", 00:22:23.174 "traddr": "10.0.0.2", 00:22:23.174 "adrfam": "ipv4", 00:22:23.174 "trsvcid": "4420", 00:22:23.174 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:23.174 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:23.174 "hdgst": false, 00:22:23.174 "ddgst": false 00:22:23.174 }, 00:22:23.174 "method": "bdev_nvme_attach_controller" 00:22:23.174 },{ 00:22:23.174 "params": { 00:22:23.174 "name": "Nvme9", 00:22:23.174 "trtype": "tcp", 00:22:23.174 "traddr": "10.0.0.2", 00:22:23.174 "adrfam": "ipv4", 00:22:23.174 "trsvcid": "4420", 00:22:23.174 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:23.174 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:23.174 "hdgst": false, 00:22:23.174 "ddgst": false 00:22:23.174 }, 00:22:23.174 "method": "bdev_nvme_attach_controller" 00:22:23.174 },{ 00:22:23.174 "params": { 00:22:23.174 "name": "Nvme10", 00:22:23.174 "trtype": "tcp", 00:22:23.174 "traddr": "10.0.0.2", 00:22:23.174 "adrfam": "ipv4", 00:22:23.174 "trsvcid": "4420", 00:22:23.174 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:23.174 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:23.174 "hdgst": false, 00:22:23.174 "ddgst": false 00:22:23.174 }, 00:22:23.174 "method": "bdev_nvme_attach_controller" 00:22:23.174 }' 00:22:23.174 [2024-05-15 17:07:01.798066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.174 [2024-05-15 17:07:01.863471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.560 Running I/O for 10 seconds... 00:22:24.560 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:24.560 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:24.560 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:24.560 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.560 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.822 17:07:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:24.822 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:25.083 17:07:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1534786 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1534786 ']' 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1534786 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:22:25.345 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.345 17:07:04 
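The waitforio loop traced above (read_io_count going 3, then 67, then 131 against a threshold of 100) is a plain poll-and-retry over the bdevperf RPC socket: ask bdev_get_iostat for Nvme1n1, pull num_read_ops out with jq, and give up after ten samples spaced 0.25 s apart. A stand-alone sketch of that polling idea follows; it assumes SPDK's scripts/rpc.py is reachable as rpc.py, and wait_for_io is a hypothetical name, not the test's own function.

# Sketch only: succeed once a bdev has completed at least 100 reads, fail after ~2.5 s.
wait_for_io() {
    local sock=$1 bdev=$2 reads i
    for ((i = 10; i > 0; i--)); do
        # bdev_get_iostat returns per-bdev counters; num_read_ops proves I/O is flowing.
        reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        [ "${reads:-0}" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}
# Usage mirroring the trace:
# wait_for_io /var/tmp/bdevperf.sock Nvme1n1 || echo "bdevperf never generated I/O"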
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1534786
00:22:25.622 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:22:25.622 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:22:25.622 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1534786'
killing process with pid 1534786
00:22:25.622 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1534786
[2024-05-15 17:07:04.204248] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:22:25.622 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1534786
00:22:25.622 [2024-05-15 17:07:04.204607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd07be0 is same with the state(5) to be set
(the message above is repeated many times with only the sub-millisecond timestamp changing; the same error is then logged in the same way for tqpair=0xb4eb20, 0xd08080, 0xd08520, 0xd089c0, 0xd08e80 and 0xb4d8a0 while the target tears down its TCP qpairs)
00:22:25.626 [2024-05-15 17:07:04.210074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d8a0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.210079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d8a0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is 
same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.626 [2024-05-15 17:07:04.211214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e1e0 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211430] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2184500 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cd200 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2162880 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.627 [2024-05-15 17:07:04.211793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.627 [2024-05-15 17:07:04.211799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1e20 is same with the state(5) to be set 00:22:25.627 [2024-05-15 17:07:04.211809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.627 [2024-05-15 17:07:04.211839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.627 [2024-05-15 17:07:04.211845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cf530 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.211975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.211991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.211998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae580 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.212020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.212035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.212050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.212065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.212082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232cd40 is same with the state(5) to be set
00:22:25.628 [2024-05-15 17:07:04.212103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.212119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.628 [2024-05-15 17:07:04.212134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.628 [2024-05-15 17:07:04.212141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:22:25.628 [2024-05-15 17:07:04.212149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.628 [2024-05-15 17:07:04.212156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.212163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6a610 is same with the state(5) to be set 00:22:25.628 [2024-05-15 17:07:04.212187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.628 [2024-05-15 17:07:04.212195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.212203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.628 [2024-05-15 17:07:04.212210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.212218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.628 [2024-05-15 17:07:04.212225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.212234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.628 [2024-05-15 17:07:04.212241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.212248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218dab0 is same with the state(5) to be set 00:22:25.628 [2024-05-15 17:07:04.213592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.628 [2024-05-15 17:07:04.213741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.628 [2024-05-15 17:07:04.213751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.213989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.213998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.629 [2024-05-15 17:07:04.214414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.629 [2024-05-15 17:07:04.214421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214721] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x2211100 was disconnected and freed. reset controller. 00:22:25.630 [2024-05-15 17:07:04.214839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.214974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.214986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:25.630 [2024-05-15 17:07:04.215212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.630 [2024-05-15 17:07:04.215268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.630 [2024-05-15 17:07:04.215277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 
[2024-05-15 17:07:04.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.215431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.215440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.221829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 
[2024-05-15 17:07:04.221906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.221968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4e680 is same with the state(5) to be set 00:22:25.631 [2024-05-15 17:07:04.231934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.231980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.231990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.231999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.631 [2024-05-15 17:07:04.232178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.631 [2024-05-15 17:07:04.232186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.232453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232522] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x215cd40 was disconnected and freed. reset controller. 
00:22:25.632 [2024-05-15 17:07:04.232735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2184500 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cd200 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2162880 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1e20 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.632 [2024-05-15 17:07:04.232825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.632 [2024-05-15 17:07:04.232841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.632 [2024-05-15 17:07:04.232856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.632 [2024-05-15 17:07:04.232871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.232878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2226f20 is same with the state(5) to be set 00:22:25.632 [2024-05-15 17:07:04.232895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cf530 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ae580 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232cd40 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6a610 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.232954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218dab0 (9): Bad file descriptor 00:22:25.632 [2024-05-15 17:07:04.233043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 
[2024-05-15 17:07:04.233074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 
17:07:04.233256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.632 [2024-05-15 17:07:04.233305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.632 [2024-05-15 17:07:04.233314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 
17:07:04.233421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 
17:07:04.233593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 
17:07:04.233757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 
17:07:04.233920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.633 [2024-05-15 17:07:04.233986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.633 [2024-05-15 17:07:04.233996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.234117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.234167] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x220e700 was disconnected and freed. reset controller. 00:22:25.634 [2024-05-15 17:07:04.236835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:25.634 [2024-05-15 17:07:04.238332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:25.634 [2024-05-15 17:07:04.238357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:25.634 [2024-05-15 17:07:04.238878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.634 [2024-05-15 17:07:04.239266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.634 [2024-05-15 17:07:04.239281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2184500 with addr=10.0.0.2, port=4420 00:22:25.634 [2024-05-15 17:07:04.239291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2184500 is same with the state(5) to be set 00:22:25.634 [2024-05-15 17:07:04.239404] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.239734] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.239775] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.240170] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.240790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.634 [2024-05-15 17:07:04.241203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.634 [2024-05-15 17:07:04.241216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232cd40 with addr=10.0.0.2, port=4420 00:22:25.634 [2024-05-15 17:07:04.241226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232cd40 is same with the state(5) to be set 00:22:25.634 [2024-05-15 17:07:04.241791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.634 [2024-05-15 17:07:04.242041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.634 [2024-05-15 17:07:04.242055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1e20 with addr=10.0.0.2, port=4420 00:22:25.634 [2024-05-15 17:07:04.242064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1e20 is same 
with the state(5) to be set 00:22:25.634 [2024-05-15 17:07:04.242079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2184500 (9): Bad file descriptor 00:22:25.634 [2024-05-15 17:07:04.242130] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.242475] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.242517] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:25.634 [2024-05-15 17:07:04.242538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232cd40 (9): Bad file descriptor 00:22:25.634 [2024-05-15 17:07:04.242572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1e20 (9): Bad file descriptor 00:22:25.634 [2024-05-15 17:07:04.242582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:25.634 [2024-05-15 17:07:04.242589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:25.634 [2024-05-15 17:07:04.242598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:25.634 [2024-05-15 17:07:04.242701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.634 [2024-05-15 17:07:04.242713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:25.634 [2024-05-15 17:07:04.242719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:25.634 [2024-05-15 17:07:04.242726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:25.634 [2024-05-15 17:07:04.242738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:25.634 [2024-05-15 17:07:04.242744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:25.634 [2024-05-15 17:07:04.242750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:25.634 [2024-05-15 17:07:04.242797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.634 [2024-05-15 17:07:04.242814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:25.634 [2024-05-15 17:07:04.242840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226f20 (9): Bad file descriptor 00:22:25.634 [2024-05-15 17:07:04.242968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.242980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.242995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.634 [2024-05-15 17:07:04.243251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.634 [2024-05-15 17:07:04.243260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.635 [2024-05-15 17:07:04.243843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.635 [2024-05-15 17:07:04.243850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.243988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.243995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.244008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.244015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.244024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.244031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.244039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2294ee0 is same with the state(5) to be set 00:22:25.636 [2024-05-15 17:07:04.245318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.636 [2024-05-15 17:07:04.245809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.636 [2024-05-15 17:07:04.245816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.245991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.245998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:25.637 [2024-05-15 17:07:04.246095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 
17:07:04.246257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.246388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.246396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220fb70 is same with the state(5) to be set 00:22:25.637 [2024-05-15 17:07:04.247674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.247687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.637 [2024-05-15 17:07:04.247699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.637 [2024-05-15 17:07:04.247708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247873] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.247987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.247994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.638 [2024-05-15 17:07:04.248381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.638 [2024-05-15 17:07:04.248388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:25.639 [2024-05-15 17:07:04.248534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 
17:07:04.248703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.248735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.248743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215a3e0 is same with the state(5) to be set 00:22:25.639 [2024-05-15 17:07:04.250002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.639 [2024-05-15 17:07:04.250295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.639 [2024-05-15 17:07:04.250305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.640 [2024-05-15 17:07:04.250744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.640 [2024-05-15 17:07:04.250753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.250983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.250990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.251000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.251007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.251016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.251023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.251033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.251040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.251049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.251056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.251064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215b840 is same with the state(5) to be set 00:22:25.641 [2024-05-15 17:07:04.252322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.641 [2024-05-15 17:07:04.252534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.641 [2024-05-15 17:07:04.252543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.252986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.252993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.642 [2024-05-15 17:07:04.253108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.642 [2024-05-15 17:07:04.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.253366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.253374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215e240 is same with the state(5) to be set 00:22:25.643 [2024-05-15 17:07:04.254643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.254992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.254999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.643 [2024-05-15 17:07:04.255009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.643 [2024-05-15 17:07:04.255016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:25.644 [2024-05-15 17:07:04.255457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 
17:07:04.255627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.644 [2024-05-15 17:07:04.255636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.644 [2024-05-15 17:07:04.255643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.255653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.255660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.255669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.255676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.255685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.255692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.255701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.255708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.255716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e410 is same with the state(5) to be set 00:22:25.645 [2024-05-15 17:07:04.257676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.645 [2024-05-15 17:07:04.257703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:25.645 [2024-05-15 17:07:04.257717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:25.645 [2024-05-15 17:07:04.257726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:25.645 [2024-05-15 17:07:04.257795] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.645 [2024-05-15 17:07:04.257815] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:25.645 [2024-05-15 17:07:04.257894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:25.645 [2024-05-15 17:07:04.257905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:25.645 [2024-05-15 17:07:04.258353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.258711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.258722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2162880 with addr=10.0.0.2, port=4420 00:22:25.645 [2024-05-15 17:07:04.258730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2162880 is same with the state(5) to be set 00:22:25.645 [2024-05-15 17:07:04.259145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.259342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.259350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218dab0 with addr=10.0.0.2, port=4420 00:22:25.645 [2024-05-15 17:07:04.259358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218dab0 is same with the state(5) to be set 00:22:25.645 [2024-05-15 17:07:04.259683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.259970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.259979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ae580 with addr=10.0.0.2, port=4420 00:22:25.645 [2024-05-15 17:07:04.259986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae580 is same with the state(5) to be set 00:22:25.645 [2024-05-15 17:07:04.260309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.260373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.645 [2024-05-15 17:07:04.260383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6a610 with addr=10.0.0.2, port=4420 00:22:25.645 [2024-05-15 17:07:04.260390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6a610 is same with the state(5) to be set 00:22:25.645 [2024-05-15 17:07:04.261717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.645 [2024-05-15 17:07:04.261929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:25.645 [2024-05-15 17:07:04.261938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.261945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.261961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.261971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.261978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.261987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.261995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:25.646 [2024-05-15 17:07:04.262101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 
17:07:04.262265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262427] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.646 [2024-05-15 17:07:04.262585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.646 [2024-05-15 17:07:04.262594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.647 [2024-05-15 17:07:04.262767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.647 [2024-05-15 17:07:04.262775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cf10 is same with the state(5) to be set 00:22:25.647 [2024-05-15 17:07:04.266465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:25.647 [2024-05-15 17:07:04.266489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:25.647 [2024-05-15 17:07:04.266498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:25.647 task offset: 27136 on job bdev=Nvme4n1 fails
00:22:25.647
00:22:25.647 Latency(us)
00:22:25.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:25.647 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme1n1 ended in about 0.95 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme1n1 : 0.95 134.74 8.42 67.37 0.00 313127.54 15728.64 256901.12
00:22:25.647 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme2n1 ended in about 0.94 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme2n1 : 0.94 203.65 12.73 67.88 0.00 228102.83 19333.12 251658.24
00:22:25.647 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme3n1 ended in about 0.95 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme3n1 : 0.95 201.62 12.60 67.21 0.00 225589.23 11741.87 256901.12
00:22:25.647 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme4n1 ended in about 0.94 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme4n1 : 0.94 204.23 12.76 68.08 0.00 217675.09 19988.48 253405.87
00:22:25.647 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme5n1 ended in about 0.95 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme5n1 : 0.95 134.08 8.38 67.04 0.00 288778.81 19114.67 260396.37
00:22:25.647 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme6n1 ended in about 0.96 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme6n1 : 0.96 200.63 12.54 66.88 0.00 212207.15 20971.52 230686.72
00:22:25.647 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme7n1 ended in about 0.94 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme7n1 : 0.94 203.95 12.75 67.98 0.00 203512.53 22063.79 255153.49
00:22:25.647 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme8n1 ended in about 0.96 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme8n1 : 0.96 139.69 8.73 66.72 0.00 262696.20 18350.08 284863.15
00:22:25.647 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme9n1 ended in about 0.97 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme9n1 : 0.97 137.31 8.58 66.07 0.00 260811.66 18896.21 276125.01
00:22:25.647 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:25.647 Job: Nvme10n1 ended in about 0.96 seconds with error
00:22:25.647 Verification LBA range: start 0x0 length 0x400
00:22:25.647 Nvme10n1 : 0.96 133.11 8.32 66.56 0.00 258742.61 18786.99 251658.24
00:22:25.647 ===================================================================================================================
00:22:25.647 Total : 1693.00 105.81 671.78 0.00 242973.16 11741.87 284863.15
00:22:25.647 [2024-05-15 17:07:04.295102] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:25.647 [2024-05-15 17:07:04.295134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:25.647 [2024-05-15 17:07:04.295596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.647 [2024-05-15 17:07:04.295958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.647 [2024-05-15 17:07:04.295967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cf530 with addr=10.0.0.2, port=4420 00:22:25.647 [2024-05-15 17:07:04.295977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cf530 is same with the state(5) to be set 00:22:25.647 [2024-05-15 17:07:04.296214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.647 [2024-05-15 17:07:04.296561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.647 [2024-05-15 17:07:04.296576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22cd200 with addr=10.0.0.2, port=4420 00:22:25.647 [2024-05-15 17:07:04.296583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cd200 is same with the state(5) to be set 00:22:25.647 [2024-05-15 17:07:04.296594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2162880 (9): Bad file descriptor 00:22:25.647 [2024-05-15 17:07:04.296605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218dab0 (9): Bad file descriptor 00:22:25.647 [2024-05-15 17:07:04.296615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ae580 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.296624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6a610 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.297087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.297407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.297417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2184500 with addr=10.0.0.2, port=4420 00:22:25.648 [2024-05-15 17:07:04.297424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2184500 is same with the state(5) to be set 00:22:25.648 [2024-05-15 17:07:04.297815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.297925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.297935] 
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d1e20 with addr=10.0.0.2, port=4420 00:22:25.648 [2024-05-15 17:07:04.297942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1e20 is same with the state(5) to be set 00:22:25.648 [2024-05-15 17:07:04.298157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.298486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.298495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232cd40 with addr=10.0.0.2, port=4420 00:22:25.648 [2024-05-15 17:07:04.298502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232cd40 is same with the state(5) to be set 00:22:25.648 [2024-05-15 17:07:04.298835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.299168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.648 [2024-05-15 17:07:04.299177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2226f20 with addr=10.0.0.2, port=4420 00:22:25.648 [2024-05-15 17:07:04.299184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2226f20 is same with the state(5) to be set 00:22:25.648 [2024-05-15 17:07:04.299192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cf530 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.299202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cd200 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.299210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299278] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:22:25.648 [2024-05-15 17:07:04.299321] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.648 [2024-05-15 17:07:04.299332] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.648 [2024-05-15 17:07:04.299342] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.648 [2024-05-15 17:07:04.299352] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.648 [2024-05-15 17:07:04.299362] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.648 [2024-05-15 17:07:04.299372] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:25.648 [2024-05-15 17:07:04.299718] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.299729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.299735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.299741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.299749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2184500 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.299758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1e20 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.299767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232cd40 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.299776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2226f20 (9): Bad file descriptor 00:22:25.648 [2024-05-15 17:07:04.299784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.299867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:25.648 [2024-05-15 17:07:04.299873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:25.648 [2024-05-15 17:07:04.299948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:25.648 [2024-05-15 17:07:04.299955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:25.648 [2024-05-15 17:07:04.299987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.299994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.300000] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.648 [2024-05-15 17:07:04.300006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:25.909 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:25.909 17:07:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1535167 00:22:26.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1535167) - No such process 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.873 rmmod nvme_tcp 00:22:26.873 rmmod nvme_fabrics 00:22:26.873 rmmod nvme_keyring 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.873 17:07:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.413 17:07:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.413 00:22:29.413 real 0m7.709s 00:22:29.413 user 0m18.629s 00:22:29.413 sys 0m1.196s 00:22:29.413 
17:07:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:29.413 17:07:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.413 ************************************ 00:22:29.413 END TEST nvmf_shutdown_tc3 00:22:29.413 ************************************ 00:22:29.413 17:07:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:29.413 00:22:29.413 real 0m32.009s 00:22:29.413 user 1m14.443s 00:22:29.413 sys 0m9.126s 00:22:29.414 17:07:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:29.414 17:07:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:29.414 ************************************ 00:22:29.414 END TEST nvmf_shutdown 00:22:29.414 ************************************ 00:22:29.414 17:07:07 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.414 17:07:07 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.414 17:07:07 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:29.414 17:07:07 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:29.414 17:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.414 ************************************ 00:22:29.414 START TEST nvmf_multicontroller 00:22:29.414 ************************************ 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:29.414 * Looking for test storage... 
00:22:29.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:29.414 17:07:07 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.414 17:07:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.025 17:07:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:36.025 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:36.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:36.025 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.025 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:36.026 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.026 17:07:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:36.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:22:36.026 00:22:36.026 --- 10.0.0.2 ping statistics --- 00:22:36.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.026 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:22:36.026 00:22:36.026 --- 10.0.0.1 ping statistics --- 00:22:36.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.026 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.026 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1539871 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1539871 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1539871 ']' 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.357 17:07:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.357 [2024-05-15 17:07:14.921098] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:22:36.357 [2024-05-15 17:07:14.921164] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.357 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.357 [2024-05-15 17:07:14.991760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.357 [2024-05-15 17:07:15.086857] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.357 [2024-05-15 17:07:15.086906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.357 [2024-05-15 17:07:15.086915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.357 [2024-05-15 17:07:15.086922] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.357 [2024-05-15 17:07:15.086928] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.357 [2024-05-15 17:07:15.087057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.357 [2024-05-15 17:07:15.087226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.357 [2024-05-15 17:07:15.087227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:36.928 [2024-05-15 17:07:15.752497] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:36.928 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.928 17:07:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.188 Malloc0 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.188 [2024-05-15 17:07:15.820018] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:37.188 [2024-05-15 17:07:15.820231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.188 [2024-05-15 17:07:15.832176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.188 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.189 Malloc1 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.189 17:07:15 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1540218 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1540218 /var/tmp/bdevperf.sock 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1540218 ']' 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:37.189 17:07:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.128 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:38.128 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:38.128 17:07:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:38.128 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.128 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.128 NVMe0n1 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.390 1 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 request: 00:22:38.390 { 00:22:38.390 "name": "NVMe0", 00:22:38.390 "trtype": "tcp", 00:22:38.390 "traddr": "10.0.0.2", 00:22:38.390 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:38.390 "hostaddr": "10.0.0.2", 00:22:38.390 "hostsvcid": "60000", 00:22:38.390 "adrfam": "ipv4", 00:22:38.390 "trsvcid": "4420", 00:22:38.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.390 "method": 
"bdev_nvme_attach_controller", 00:22:38.390 "req_id": 1 00:22:38.390 } 00:22:38.390 Got JSON-RPC error response 00:22:38.390 response: 00:22:38.390 { 00:22:38.390 "code": -114, 00:22:38.390 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:38.390 } 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:38.390 17:07:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 request: 00:22:38.390 { 00:22:38.390 "name": "NVMe0", 00:22:38.390 "trtype": "tcp", 00:22:38.390 "traddr": "10.0.0.2", 00:22:38.390 "hostaddr": "10.0.0.2", 00:22:38.390 "hostsvcid": "60000", 00:22:38.390 "adrfam": "ipv4", 00:22:38.390 "trsvcid": "4420", 00:22:38.390 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.390 "method": "bdev_nvme_attach_controller", 00:22:38.390 "req_id": 1 00:22:38.390 } 00:22:38.390 Got JSON-RPC error response 00:22:38.390 response: 00:22:38.390 { 00:22:38.390 "code": -114, 00:22:38.390 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:38.390 } 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.390 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.390 request: 00:22:38.390 { 00:22:38.390 "name": "NVMe0", 00:22:38.390 "trtype": "tcp", 00:22:38.390 "traddr": "10.0.0.2", 00:22:38.390 "hostaddr": "10.0.0.2", 00:22:38.390 "hostsvcid": "60000", 00:22:38.390 "adrfam": "ipv4", 00:22:38.390 "trsvcid": "4420", 00:22:38.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.390 "multipath": "disable", 00:22:38.390 "method": "bdev_nvme_attach_controller", 00:22:38.390 "req_id": 1 00:22:38.390 } 00:22:38.390 Got JSON-RPC error response 00:22:38.390 response: 00:22:38.390 { 00:22:38.390 "code": -114, 00:22:38.390 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:38.390 } 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.391 request: 00:22:38.391 { 00:22:38.391 "name": "NVMe0", 00:22:38.391 "trtype": "tcp", 00:22:38.391 "traddr": "10.0.0.2", 00:22:38.391 "hostaddr": "10.0.0.2", 00:22:38.391 "hostsvcid": "60000", 00:22:38.391 "adrfam": "ipv4", 00:22:38.391 "trsvcid": "4420", 00:22:38.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.391 "multipath": "failover", 00:22:38.391 "method": "bdev_nvme_attach_controller", 00:22:38.391 "req_id": 1 00:22:38.391 } 00:22:38.391 Got JSON-RPC error response 00:22:38.391 response: 00:22:38.391 { 00:22:38.391 "code": -114, 00:22:38.391 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:38.391 } 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.391 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.651 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.651 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.912 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:38.912 17:07:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.852 0 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1540218 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1540218 ']' 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1540218 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:39.852 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1540218 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1540218' 00:22:40.113 killing process with pid 1540218 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1540218 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1540218 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:40.113 17:07:18 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:22:40.113 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:22:40.113 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:40.113 [2024-05-15 17:07:15.950248] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:22:40.113 [2024-05-15 17:07:15.950306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540218 ] 00:22:40.113 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.113 [2024-05-15 17:07:16.009138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.113 [2024-05-15 17:07:16.073803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.113 [2024-05-15 17:07:17.492469] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 21b60db8-44c6-4b61-a96d-2e93e55b2acc already exists 00:22:40.113 [2024-05-15 17:07:17.492499] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:21b60db8-44c6-4b61-a96d-2e93e55b2acc alias for bdev NVMe1n1 00:22:40.113 [2024-05-15 17:07:17.492509] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:40.113 Running I/O for 1 seconds... 
00:22:40.113 00:22:40.113 Latency(us) 00:22:40.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.113 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:40.113 NVMe0n1 : 1.00 29120.25 113.75 0.00 0.00 4386.93 2430.29 16930.13 00:22:40.114 =================================================================================================================== 00:22:40.114 Total : 29120.25 113.75 0.00 0.00 4386.93 2430.29 16930.13 00:22:40.114 Received shutdown signal, test time was about 1.000000 seconds 00:22:40.114 00:22:40.114 Latency(us) 00:22:40.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.114 =================================================================================================================== 00:22:40.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.114 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.114 rmmod nvme_tcp 00:22:40.114 rmmod nvme_fabrics 00:22:40.114 rmmod nvme_keyring 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1539871 ']' 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1539871 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1539871 ']' 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1539871 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:40.114 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1539871 00:22:40.374 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:40.374 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:40.374 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1539871' 00:22:40.374 killing process with pid 1539871 00:22:40.374 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1539871 00:22:40.374 [2024-05-15 
17:07:18.990631] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:40.374 17:07:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1539871 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.374 17:07:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.917 17:07:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.917 00:22:42.917 real 0m13.461s 00:22:42.917 user 0m17.305s 00:22:42.917 sys 0m5.952s 00:22:42.917 17:07:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:42.917 17:07:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.917 ************************************ 00:22:42.917 END TEST nvmf_multicontroller 00:22:42.917 ************************************ 00:22:42.917 17:07:21 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:42.917 17:07:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:42.917 17:07:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:42.917 17:07:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.917 ************************************ 00:22:42.917 START TEST nvmf_aer 00:22:42.917 ************************************ 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:42.917 * Looking for test storage... 
00:22:42.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.917 17:07:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.502 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.502 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.502 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.502 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:49.502 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.502 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.503 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:22:49.503 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.503 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.503 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.503 
17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.503 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:49.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:22:49.764 00:22:49.764 --- 10.0.0.2 ping statistics --- 00:22:49.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.764 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:22:49.764 00:22:49.764 --- 10.0.0.1 ping statistics --- 00:22:49.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.764 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1544857 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1544857 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1544857 ']' 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.764 17:07:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:49.764 [2024-05-15 17:07:28.582496] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:22:49.764 [2024-05-15 17:07:28.582557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.073 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.073 [2024-05-15 17:07:28.651504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.073 [2024-05-15 17:07:28.723056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.073 [2024-05-15 17:07:28.723096] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:50.073 [2024-05-15 17:07:28.723104] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.073 [2024-05-15 17:07:28.723110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.073 [2024-05-15 17:07:28.723116] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.073 [2024-05-15 17:07:28.723258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.073 [2024-05-15 17:07:28.723371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.073 [2024-05-15 17:07:28.723528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.073 [2024-05-15 17:07:28.723530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 [2024-05-15 17:07:29.405156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 Malloc0 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 [2024-05-15 17:07:29.464470] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:50.645 [2024-05-15 17:07:29.464714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.645 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:50.645 [ 00:22:50.645 { 00:22:50.645 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:50.645 "subtype": "Discovery", 00:22:50.645 "listen_addresses": [], 00:22:50.645 "allow_any_host": true, 00:22:50.645 "hosts": [] 00:22:50.645 }, 00:22:50.645 { 00:22:50.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.645 "subtype": "NVMe", 00:22:50.645 "listen_addresses": [ 00:22:50.645 { 00:22:50.645 "trtype": "TCP", 00:22:50.645 "adrfam": "IPv4", 00:22:50.905 "traddr": "10.0.0.2", 00:22:50.905 "trsvcid": "4420" 00:22:50.905 } 00:22:50.905 ], 00:22:50.905 "allow_any_host": true, 00:22:50.905 "hosts": [], 00:22:50.905 "serial_number": "SPDK00000000000001", 00:22:50.905 "model_number": "SPDK bdev Controller", 00:22:50.905 "max_namespaces": 2, 00:22:50.905 "min_cntlid": 1, 00:22:50.905 "max_cntlid": 65519, 00:22:50.905 "namespaces": [ 00:22:50.905 { 00:22:50.905 "nsid": 1, 00:22:50.905 "bdev_name": "Malloc0", 00:22:50.905 "name": "Malloc0", 00:22:50.905 "nguid": "8D83B88D774D4667A67314E5669D0E55", 00:22:50.905 "uuid": "8d83b88d-774d-4667-a673-14e5669d0e55" 00:22:50.905 } 00:22:50.905 ] 00:22:50.905 } 00:22:50.905 ] 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1544962 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:50.905 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:22:50.905 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:51.165 Malloc1 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.165 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:51.166 Asynchronous Event Request test 00:22:51.166 Attaching to 10.0.0.2 00:22:51.166 Attached to 10.0.0.2 00:22:51.166 Registering asynchronous event callbacks... 00:22:51.166 Starting namespace attribute notice tests for all controllers... 00:22:51.166 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:51.166 aer_cb - Changed Namespace 00:22:51.166 Cleaning up... 
00:22:51.166 [ 00:22:51.166 { 00:22:51.166 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:51.166 "subtype": "Discovery", 00:22:51.166 "listen_addresses": [], 00:22:51.166 "allow_any_host": true, 00:22:51.166 "hosts": [] 00:22:51.166 }, 00:22:51.166 { 00:22:51.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.166 "subtype": "NVMe", 00:22:51.166 "listen_addresses": [ 00:22:51.166 { 00:22:51.166 "trtype": "TCP", 00:22:51.166 "adrfam": "IPv4", 00:22:51.166 "traddr": "10.0.0.2", 00:22:51.166 "trsvcid": "4420" 00:22:51.166 } 00:22:51.166 ], 00:22:51.166 "allow_any_host": true, 00:22:51.166 "hosts": [], 00:22:51.166 "serial_number": "SPDK00000000000001", 00:22:51.166 "model_number": "SPDK bdev Controller", 00:22:51.166 "max_namespaces": 2, 00:22:51.166 "min_cntlid": 1, 00:22:51.166 "max_cntlid": 65519, 00:22:51.166 "namespaces": [ 00:22:51.166 { 00:22:51.166 "nsid": 1, 00:22:51.166 "bdev_name": "Malloc0", 00:22:51.166 "name": "Malloc0", 00:22:51.166 "nguid": "8D83B88D774D4667A67314E5669D0E55", 00:22:51.166 "uuid": "8d83b88d-774d-4667-a673-14e5669d0e55" 00:22:51.166 }, 00:22:51.166 { 00:22:51.166 "nsid": 2, 00:22:51.166 "bdev_name": "Malloc1", 00:22:51.166 "name": "Malloc1", 00:22:51.166 "nguid": "B62117D0B20F4FD59DEEDCE05311A788", 00:22:51.166 "uuid": "b62117d0-b20f-4fd5-9dee-dce05311a788" 00:22:51.166 } 00:22:51.166 ] 00:22:51.166 } 00:22:51.166 ] 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1544962 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.166 rmmod nvme_tcp 00:22:51.166 rmmod nvme_fabrics 00:22:51.166 rmmod nvme_keyring 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1544857 ']' 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1544857 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1544857 ']' 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1544857 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:51.166 17:07:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1544857 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1544857' 00:22:51.426 killing process with pid 1544857 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1544857 00:22:51.426 [2024-05-15 17:07:30.042632] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1544857 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.426 17:07:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.969 17:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.969 00:22:53.969 real 0m10.977s 00:22:53.969 user 0m7.958s 00:22:53.969 sys 0m5.628s 00:22:53.969 17:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:53.969 17:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:53.969 ************************************ 00:22:53.969 END TEST nvmf_aer 00:22:53.969 ************************************ 00:22:53.969 17:07:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:53.969 17:07:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:53.969 17:07:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:53.969 17:07:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.969 ************************************ 00:22:53.969 START TEST nvmf_async_init 00:22:53.969 ************************************ 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 
00:22:53.969 * Looking for test storage... 00:22:53.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:53.969 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=242060f2ab764c3bb3a4ef284004ad65 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.970 17:07:32 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.970 17:07:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:00.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:00.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:00.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:00.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:23:00.556 00:23:00.556 --- 10.0.0.2 ping statistics --- 00:23:00.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.556 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:23:00.556 00:23:00.556 --- 10.0.0.1 ping statistics --- 00:23:00.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.556 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:00.556 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1549177 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1549177 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1549177 ']' 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:00.817 17:07:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:00.817 [2024-05-15 17:07:39.451183] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:23:00.817 [2024-05-15 17:07:39.451270] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.817 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.817 [2024-05-15 17:07:39.523350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.817 [2024-05-15 17:07:39.596753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.817 [2024-05-15 17:07:39.596793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.817 [2024-05-15 17:07:39.596801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.817 [2024-05-15 17:07:39.596808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.817 [2024-05-15 17:07:39.596813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.817 [2024-05-15 17:07:39.596831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.388 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:01.388 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:01.388 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.388 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.388 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.648 [2024-05-15 17:07:40.255486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.648 null0 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:01.648 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 242060f2ab764c3bb3a4ef284004ad65 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.649 [2024-05-15 17:07:40.315600] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:01.649 [2024-05-15 17:07:40.315788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.649 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.909 nvme0n1 00:23:01.909 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.909 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.909 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.909 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.909 [ 00:23:01.909 { 00:23:01.909 "name": "nvme0n1", 00:23:01.909 "aliases": [ 00:23:01.909 "242060f2-ab76-4c3b-b3a4-ef284004ad65" 00:23:01.909 ], 00:23:01.909 "product_name": "NVMe disk", 00:23:01.909 "block_size": 512, 00:23:01.909 "num_blocks": 2097152, 00:23:01.909 "uuid": "242060f2-ab76-4c3b-b3a4-ef284004ad65", 00:23:01.909 "assigned_rate_limits": { 00:23:01.909 "rw_ios_per_sec": 0, 00:23:01.909 "rw_mbytes_per_sec": 0, 00:23:01.909 "r_mbytes_per_sec": 0, 00:23:01.909 "w_mbytes_per_sec": 0 00:23:01.909 }, 00:23:01.909 "claimed": false, 00:23:01.909 "zoned": false, 00:23:01.909 "supported_io_types": { 00:23:01.909 "read": true, 00:23:01.909 "write": true, 00:23:01.909 "unmap": false, 00:23:01.909 "write_zeroes": true, 00:23:01.909 "flush": true, 00:23:01.909 "reset": true, 00:23:01.909 "compare": true, 00:23:01.909 "compare_and_write": true, 00:23:01.909 "abort": true, 00:23:01.909 "nvme_admin": true, 00:23:01.909 "nvme_io": true 00:23:01.909 }, 00:23:01.910 "memory_domains": [ 00:23:01.910 { 00:23:01.910 "dma_device_id": "system", 00:23:01.910 "dma_device_type": 1 00:23:01.910 } 00:23:01.910 ], 00:23:01.910 "driver_specific": { 00:23:01.910 "nvme": [ 00:23:01.910 { 00:23:01.910 "trid": { 00:23:01.910 "trtype": "TCP", 00:23:01.910 "adrfam": "IPv4", 00:23:01.910 "traddr": "10.0.0.2", 00:23:01.910 "trsvcid": "4420", 00:23:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.910 }, 
00:23:01.910 "ctrlr_data": { 00:23:01.910 "cntlid": 1, 00:23:01.910 "vendor_id": "0x8086", 00:23:01.910 "model_number": "SPDK bdev Controller", 00:23:01.910 "serial_number": "00000000000000000000", 00:23:01.910 "firmware_revision": "24.05", 00:23:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.910 "oacs": { 00:23:01.910 "security": 0, 00:23:01.910 "format": 0, 00:23:01.910 "firmware": 0, 00:23:01.910 "ns_manage": 0 00:23:01.910 }, 00:23:01.910 "multi_ctrlr": true, 00:23:01.910 "ana_reporting": false 00:23:01.910 }, 00:23:01.910 "vs": { 00:23:01.910 "nvme_version": "1.3" 00:23:01.910 }, 00:23:01.910 "ns_data": { 00:23:01.910 "id": 1, 00:23:01.910 "can_share": true 00:23:01.910 } 00:23:01.910 } 00:23:01.910 ], 00:23:01.910 "mp_policy": "active_passive" 00:23:01.910 } 00:23:01.910 } 00:23:01.910 ] 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.910 [2024-05-15 17:07:40.585601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.910 [2024-05-15 17:07:40.585661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x755e60 (9): Bad file descriptor 00:23:01.910 [2024-05-15 17:07:40.717640] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.910 [ 00:23:01.910 { 00:23:01.910 "name": "nvme0n1", 00:23:01.910 "aliases": [ 00:23:01.910 "242060f2-ab76-4c3b-b3a4-ef284004ad65" 00:23:01.910 ], 00:23:01.910 "product_name": "NVMe disk", 00:23:01.910 "block_size": 512, 00:23:01.910 "num_blocks": 2097152, 00:23:01.910 "uuid": "242060f2-ab76-4c3b-b3a4-ef284004ad65", 00:23:01.910 "assigned_rate_limits": { 00:23:01.910 "rw_ios_per_sec": 0, 00:23:01.910 "rw_mbytes_per_sec": 0, 00:23:01.910 "r_mbytes_per_sec": 0, 00:23:01.910 "w_mbytes_per_sec": 0 00:23:01.910 }, 00:23:01.910 "claimed": false, 00:23:01.910 "zoned": false, 00:23:01.910 "supported_io_types": { 00:23:01.910 "read": true, 00:23:01.910 "write": true, 00:23:01.910 "unmap": false, 00:23:01.910 "write_zeroes": true, 00:23:01.910 "flush": true, 00:23:01.910 "reset": true, 00:23:01.910 "compare": true, 00:23:01.910 "compare_and_write": true, 00:23:01.910 "abort": true, 00:23:01.910 "nvme_admin": true, 00:23:01.910 "nvme_io": true 00:23:01.910 }, 00:23:01.910 "memory_domains": [ 00:23:01.910 { 00:23:01.910 "dma_device_id": "system", 00:23:01.910 "dma_device_type": 1 00:23:01.910 } 00:23:01.910 ], 00:23:01.910 "driver_specific": { 00:23:01.910 "nvme": [ 00:23:01.910 { 00:23:01.910 "trid": { 00:23:01.910 "trtype": "TCP", 00:23:01.910 "adrfam": "IPv4", 00:23:01.910 "traddr": "10.0.0.2", 00:23:01.910 "trsvcid": "4420", 00:23:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.910 }, 00:23:01.910 "ctrlr_data": { 00:23:01.910 "cntlid": 2, 00:23:01.910 
"vendor_id": "0x8086", 00:23:01.910 "model_number": "SPDK bdev Controller", 00:23:01.910 "serial_number": "00000000000000000000", 00:23:01.910 "firmware_revision": "24.05", 00:23:01.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.910 "oacs": { 00:23:01.910 "security": 0, 00:23:01.910 "format": 0, 00:23:01.910 "firmware": 0, 00:23:01.910 "ns_manage": 0 00:23:01.910 }, 00:23:01.910 "multi_ctrlr": true, 00:23:01.910 "ana_reporting": false 00:23:01.910 }, 00:23:01.910 "vs": { 00:23:01.910 "nvme_version": "1.3" 00:23:01.910 }, 00:23:01.910 "ns_data": { 00:23:01.910 "id": 1, 00:23:01.910 "can_share": true 00:23:01.910 } 00:23:01.910 } 00:23:01.910 ], 00:23:01.910 "mp_policy": "active_passive" 00:23:01.910 } 00:23:01.910 } 00:23:01.910 ] 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.910 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9qq1ciaws2 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9qq1ciaws2 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 [2024-05-15 17:07:40.790236] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.171 [2024-05-15 17:07:40.790346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9qq1ciaws2 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 [2024-05-15 17:07:40.802260] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9qq1ciaws2 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 [2024-05-15 17:07:40.814295] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.171 [2024-05-15 17:07:40.814330] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:02.171 nvme0n1 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 [ 00:23:02.171 { 00:23:02.171 "name": "nvme0n1", 00:23:02.171 "aliases": [ 00:23:02.171 "242060f2-ab76-4c3b-b3a4-ef284004ad65" 00:23:02.171 ], 00:23:02.171 "product_name": "NVMe disk", 00:23:02.171 "block_size": 512, 00:23:02.171 "num_blocks": 2097152, 00:23:02.171 "uuid": "242060f2-ab76-4c3b-b3a4-ef284004ad65", 00:23:02.171 "assigned_rate_limits": { 00:23:02.171 "rw_ios_per_sec": 0, 00:23:02.171 "rw_mbytes_per_sec": 0, 00:23:02.171 "r_mbytes_per_sec": 0, 00:23:02.171 "w_mbytes_per_sec": 0 00:23:02.171 }, 00:23:02.171 "claimed": false, 00:23:02.171 "zoned": false, 00:23:02.171 "supported_io_types": { 00:23:02.171 "read": true, 00:23:02.171 "write": true, 00:23:02.171 "unmap": false, 00:23:02.171 "write_zeroes": true, 00:23:02.171 "flush": true, 00:23:02.171 "reset": true, 00:23:02.171 "compare": true, 00:23:02.171 "compare_and_write": true, 00:23:02.171 "abort": true, 00:23:02.171 "nvme_admin": true, 00:23:02.171 "nvme_io": true 00:23:02.171 }, 00:23:02.171 "memory_domains": [ 00:23:02.171 { 00:23:02.171 "dma_device_id": "system", 00:23:02.171 "dma_device_type": 1 00:23:02.171 } 00:23:02.171 ], 00:23:02.171 "driver_specific": { 00:23:02.171 "nvme": [ 00:23:02.171 { 00:23:02.171 "trid": { 00:23:02.171 "trtype": "TCP", 00:23:02.171 "adrfam": "IPv4", 00:23:02.171 "traddr": "10.0.0.2", 00:23:02.171 "trsvcid": "4421", 00:23:02.171 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:02.171 }, 00:23:02.171 "ctrlr_data": { 00:23:02.171 "cntlid": 3, 00:23:02.171 "vendor_id": "0x8086", 00:23:02.171 "model_number": "SPDK bdev Controller", 00:23:02.171 "serial_number": "00000000000000000000", 00:23:02.171 "firmware_revision": "24.05", 00:23:02.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:02.171 "oacs": { 00:23:02.171 "security": 0, 00:23:02.171 "format": 0, 00:23:02.171 "firmware": 0, 00:23:02.171 "ns_manage": 0 00:23:02.171 }, 00:23:02.171 "multi_ctrlr": true, 00:23:02.171 "ana_reporting": false 00:23:02.171 }, 00:23:02.171 "vs": { 00:23:02.171 "nvme_version": "1.3" 00:23:02.171 }, 00:23:02.171 "ns_data": { 00:23:02.171 "id": 1, 00:23:02.171 "can_share": true 00:23:02.171 } 00:23:02.171 } 00:23:02.171 ], 00:23:02.171 "mp_policy": "active_passive" 00:23:02.171 } 00:23:02.171 } 00:23:02.171 ] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.9qq1ciaws2 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.171 rmmod nvme_tcp 00:23:02.171 rmmod nvme_fabrics 00:23:02.171 rmmod nvme_keyring 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1549177 ']' 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1549177 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1549177 ']' 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1549177 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:02.171 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:02.172 17:07:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1549177 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1549177' 00:23:02.432 killing process with pid 1549177 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1549177 00:23:02.432 [2024-05-15 17:07:41.038962] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:02.432 [2024-05-15 17:07:41.038988] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:02.432 [2024-05-15 17:07:41.038997] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1549177 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.432 17:07:41 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.432 17:07:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.978 17:07:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.978 00:23:04.978 real 0m10.938s 00:23:04.978 user 0m3.863s 00:23:04.978 sys 0m5.508s 00:23:04.978 17:07:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:04.978 17:07:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:04.978 ************************************ 00:23:04.978 END TEST nvmf_async_init 00:23:04.978 ************************************ 00:23:04.978 17:07:43 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:04.978 17:07:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:04.978 17:07:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:04.978 17:07:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.978 ************************************ 00:23:04.978 START TEST dma 00:23:04.978 ************************************ 00:23:04.978 17:07:43 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:04.978 * Looking for test storage... 
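Stripped of the xtrace noise, the nvmf_async_init run that just finished is a fixed sequence of RPCs against the target followed by a TLS-protected reconnect on port 4421. A condensed sketch of that flow is below: rpc_cmd in the trace is SPDK's wrapper around scripts/rpc.py, every flag is copied from the trace, and the redirect into the key file is an assumption (the xtrace only shows the echo).

RPC=./scripts/rpc.py                 # run from an SPDK checkout; target already up
NQN=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host1

$RPC nvmf_create_transport -t tcp -o
$RPC bdev_null_create null0 1024 512                 # 1024 MiB null bdev, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem "$NQN" -a
$RPC nvmf_subsystem_add_ns "$NQN" null0 -g 242060f2ab764c3bb3a4ef284004ad65
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n "$NQN"
$RPC bdev_get_bdevs -b nvme0n1                       # the namespace surfaces as nvme0n1
$RPC bdev_nvme_reset_controller nvme0                # cntlid bumps 1 -> 2 in the dumps above
$RPC bdev_nvme_detach_controller nvme0

# TLS leg: lock the subsystem down, open a --secure-channel listener on 4421
# and reconnect with the interop PSK echoed in the trace.
KEY=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
chmod 0600 "$KEY"
$RPC nvmf_subsystem_allow_any_host "$NQN" --disable
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host "$NQN" "$HOST" --psk "$KEY"
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n "$NQN" -q "$HOST" --psk "$KEY"
$RPC bdev_nvme_detach_controller nvme0
rm -f "$KEY"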
00:23:04.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:04.978 17:07:43 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.978 17:07:43 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.978 17:07:43 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.978 17:07:43 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.978 17:07:43 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.978 17:07:43 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.978 17:07:43 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.978 17:07:43 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:04.978 17:07:43 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.978 17:07:43 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.978 17:07:43 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:04.978 17:07:43 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:04.978 00:23:04.978 real 0m0.131s 00:23:04.978 user 0m0.064s 00:23:04.978 sys 0m0.075s 00:23:04.978 17:07:43 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:04.978 17:07:43 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:04.978 ************************************ 00:23:04.978 END TEST dma 00:23:04.978 ************************************ 00:23:04.978 17:07:43 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:04.978 17:07:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:04.978 17:07:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:04.978 17:07:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:04.978 ************************************ 00:23:04.978 START TEST nvmf_identify 00:23:04.978 ************************************ 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:04.978 * Looking for test storage... 
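The dma suite above is intentionally a no-op for this configuration: host/dma.sh only exercises DMA offload over RDMA and bails out early on TCP, which is why the whole TEST finishes in well under a second. The guard it hits amounts to the following (the transport variable name is not visible in the trace, so $TEST_TRANSPORT below is a placeholder):

# Reproduction of the early exit seen at host/dma.sh@12-13 above.
if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0          # nothing to test for tcp; the suite still reports END TEST dma
fi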
00:23:04.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.978 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:04.979 17:07:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:11.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:11.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:11.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:11.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.683 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:23:11.944 00:23:11.944 --- 10.0.0.2 ping statistics --- 00:23:11.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.944 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:23:11.944 00:23:11.944 --- 10.0.0.1 ping statistics --- 00:23:11.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.944 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.944 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1553541 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1553541 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1553541 ']' 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.206 17:07:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:12.206 [2024-05-15 17:07:50.839445] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:23:12.206 [2024-05-15 17:07:50.839516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.206 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.206 [2024-05-15 17:07:50.911857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.206 [2024-05-15 17:07:50.989024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
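For the identify suite the target is launched inside the same test namespace and the harness then blocks until the RPC socket answers (waitforlisten in the trace). A simplified sketch of that launch-and-wait step is below; the flags are copied from the trace, while the polling loop (here using rpc_get_methods) merely stands in for SPDK's real waitforlisten helper.

# Launch nvmf_tgt in the test netns with the traced flags
# (-i 0: shared-memory id, -e 0xFFFF: all tracepoint groups, -m 0xF: cores 0-3),
# then poll the RPC socket until the app is ready.
NS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
done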
00:23:12.206 [2024-05-15 17:07:50.989061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.206 [2024-05-15 17:07:50.989069] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.206 [2024-05-15 17:07:50.989075] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.206 [2024-05-15 17:07:50.989080] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.206 [2024-05-15 17:07:50.989218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.206 [2024-05-15 17:07:50.989346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.206 [2024-05-15 17:07:50.989506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.206 [2024-05-15 17:07:50.989507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 [2024-05-15 17:07:51.622922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 Malloc0 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 [2024-05-15 17:07:51.722204] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:13.152 [2024-05-15 17:07:51.722427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.152 [ 00:23:13.152 { 00:23:13.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:13.152 "subtype": "Discovery", 00:23:13.152 "listen_addresses": [ 00:23:13.152 { 00:23:13.152 "trtype": "TCP", 00:23:13.152 "adrfam": "IPv4", 00:23:13.152 "traddr": "10.0.0.2", 00:23:13.152 "trsvcid": "4420" 00:23:13.152 } 00:23:13.152 ], 00:23:13.152 "allow_any_host": true, 00:23:13.152 "hosts": [] 00:23:13.152 }, 00:23:13.152 { 00:23:13.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.152 "subtype": "NVMe", 00:23:13.152 "listen_addresses": [ 00:23:13.152 { 00:23:13.152 "trtype": "TCP", 00:23:13.152 "adrfam": "IPv4", 00:23:13.152 "traddr": "10.0.0.2", 00:23:13.152 "trsvcid": "4420" 00:23:13.152 } 00:23:13.152 ], 00:23:13.152 "allow_any_host": true, 00:23:13.152 "hosts": [], 00:23:13.152 "serial_number": "SPDK00000000000001", 00:23:13.152 "model_number": "SPDK bdev Controller", 00:23:13.152 "max_namespaces": 32, 00:23:13.152 "min_cntlid": 1, 00:23:13.152 "max_cntlid": 65519, 00:23:13.152 "namespaces": [ 00:23:13.152 { 00:23:13.152 "nsid": 1, 00:23:13.152 "bdev_name": "Malloc0", 00:23:13.152 "name": "Malloc0", 00:23:13.152 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:13.152 "eui64": "ABCDEF0123456789", 00:23:13.152 "uuid": "fec3a933-27f8-4199-ad86-28140df15f21" 00:23:13.152 } 00:23:13.152 ] 00:23:13.152 } 00:23:13.152 ] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.152 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:13.152 [2024-05-15 17:07:51.784710] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
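The identify target setup above reduces to the RPC sequence sketched below, followed by the spdk_nvme_identify invocation against the discovery NQN. Every flag is copied from the trace; rpc_cmd wraps scripts/rpc.py, and the relative paths are assumptions.

RPC=./scripts/rpc.py                 # run from the SPDK checkout used in this job
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192          # 8 KiB IO unit size
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                              # returns discovery + cnode1, as dumped above

# Identify pass against the discovery subsystem, as driven by identify.sh@39:
./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all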
00:23:13.152 [2024-05-15 17:07:51.784781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553881 ] 00:23:13.152 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.152 [2024-05-15 17:07:51.818213] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:13.152 [2024-05-15 17:07:51.818259] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:13.152 [2024-05-15 17:07:51.818264] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:13.152 [2024-05-15 17:07:51.818276] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:13.152 [2024-05-15 17:07:51.818283] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:13.152 [2024-05-15 17:07:51.821581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:13.152 [2024-05-15 17:07:51.821615] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e3cc30 0 00:23:13.152 [2024-05-15 17:07:51.829554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:13.152 [2024-05-15 17:07:51.829569] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:13.152 [2024-05-15 17:07:51.829577] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:13.152 [2024-05-15 17:07:51.829580] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:13.152 [2024-05-15 17:07:51.829615] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.829621] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.829625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.152 [2024-05-15 17:07:51.829638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:13.152 [2024-05-15 17:07:51.829654] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.152 [2024-05-15 17:07:51.837555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.152 [2024-05-15 17:07:51.837565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.152 [2024-05-15 17:07:51.837568] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.837573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.152 [2024-05-15 17:07:51.837584] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:13.152 [2024-05-15 17:07:51.837591] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:13.152 [2024-05-15 17:07:51.837597] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:13.152 [2024-05-15 17:07:51.837608] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.837612] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.837616] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.152 [2024-05-15 17:07:51.837623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-05-15 17:07:51.837635] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.152 [2024-05-15 17:07:51.837858] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.152 [2024-05-15 17:07:51.837864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.152 [2024-05-15 17:07:51.837868] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.837871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.152 [2024-05-15 17:07:51.837878] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:13.152 [2024-05-15 17:07:51.837885] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:13.152 [2024-05-15 17:07:51.837891] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.837895] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.837898] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.152 [2024-05-15 17:07:51.837905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.152 [2024-05-15 17:07:51.837915] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.152 [2024-05-15 17:07:51.838116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.152 [2024-05-15 17:07:51.838122] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.152 [2024-05-15 17:07:51.838125] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.838129] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.152 [2024-05-15 17:07:51.838135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:13.152 [2024-05-15 17:07:51.838145] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:13.152 [2024-05-15 17:07:51.838152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.838155] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.152 [2024-05-15 17:07:51.838159] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.838165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.153 [2024-05-15 17:07:51.838175] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.838377] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.153 [2024-05-15 
17:07:51.838383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.153 [2024-05-15 17:07:51.838386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.838390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.153 [2024-05-15 17:07:51.838396] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:13.153 [2024-05-15 17:07:51.838405] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.838408] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.838412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.838419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.153 [2024-05-15 17:07:51.838428] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.838639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.153 [2024-05-15 17:07:51.838645] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.153 [2024-05-15 17:07:51.838649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.838653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.153 [2024-05-15 17:07:51.838658] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:13.153 [2024-05-15 17:07:51.838663] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:13.153 [2024-05-15 17:07:51.838670] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:13.153 [2024-05-15 17:07:51.838775] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:13.153 [2024-05-15 17:07:51.838780] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:13.153 [2024-05-15 17:07:51.838788] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.838792] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.838795] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.838802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.153 [2024-05-15 17:07:51.838812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.839011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.153 [2024-05-15 17:07:51.839018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.153 [2024-05-15 17:07:51.839023] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839027] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.153 [2024-05-15 17:07:51.839032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:13.153 [2024-05-15 17:07:51.839041] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839045] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839048] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.839055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.153 [2024-05-15 17:07:51.839064] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.839263] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.153 [2024-05-15 17:07:51.839270] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.153 [2024-05-15 17:07:51.839273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839277] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.153 [2024-05-15 17:07:51.839282] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:13.153 [2024-05-15 17:07:51.839287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:13.153 [2024-05-15 17:07:51.839294] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:13.153 [2024-05-15 17:07:51.839301] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:13.153 [2024-05-15 17:07:51.839310] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839313] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.839320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.153 [2024-05-15 17:07:51.839330] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.839558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.153 [2024-05-15 17:07:51.839565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.153 [2024-05-15 17:07:51.839569] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839573] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3cc30): datao=0, datal=4096, cccid=0 00:23:13.153 [2024-05-15 17:07:51.839577] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea4980) on tqpair(0x1e3cc30): expected_datao=0, payload_size=4096 00:23:13.153 [2024-05-15 17:07:51.839582] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839590] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839594] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839762] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.153 [2024-05-15 17:07:51.839768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.153 [2024-05-15 17:07:51.839772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.153 [2024-05-15 17:07:51.839783] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:13.153 [2024-05-15 17:07:51.839788] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:13.153 [2024-05-15 17:07:51.839795] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:13.153 [2024-05-15 17:07:51.839800] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:13.153 [2024-05-15 17:07:51.839805] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:13.153 [2024-05-15 17:07:51.839809] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:13.153 [2024-05-15 17:07:51.839820] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:13.153 [2024-05-15 17:07:51.839828] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.839836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.839843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:13.153 [2024-05-15 17:07:51.839853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.840061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.153 [2024-05-15 17:07:51.840067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.153 [2024-05-15 17:07:51.840070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840074] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4980) on tqpair=0x1e3cc30 00:23:13.153 [2024-05-15 17:07:51.840084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840088] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840092] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.840098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:13.153 [2024-05-15 17:07:51.840104] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.840116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.153 [2024-05-15 17:07:51.840122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.840135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.153 [2024-05-15 17:07:51.840141] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840148] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.840153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.153 [2024-05-15 17:07:51.840158] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:13.153 [2024-05-15 17:07:51.840166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:13.153 [2024-05-15 17:07:51.840174] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.153 [2024-05-15 17:07:51.840177] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3cc30) 00:23:13.153 [2024-05-15 17:07:51.840184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.153 [2024-05-15 17:07:51.840195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4980, cid 0, qid 0 00:23:13.153 [2024-05-15 17:07:51.840200] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4ae0, cid 1, qid 0 00:23:13.153 [2024-05-15 17:07:51.840205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4c40, cid 2, qid 0 00:23:13.153 [2024-05-15 17:07:51.840209] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.154 [2024-05-15 17:07:51.840214] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4f00, cid 4, qid 0 00:23:13.154 [2024-05-15 17:07:51.840457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.154 [2024-05-15 17:07:51.840463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.154 [2024-05-15 17:07:51.840467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840470] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4f00) on tqpair=0x1e3cc30 
00:23:13.154 [2024-05-15 17:07:51.840479] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:13.154 [2024-05-15 17:07:51.840484] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:13.154 [2024-05-15 17:07:51.840493] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3cc30) 00:23:13.154 [2024-05-15 17:07:51.840503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.154 [2024-05-15 17:07:51.840513] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4f00, cid 4, qid 0 00:23:13.154 [2024-05-15 17:07:51.840692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.154 [2024-05-15 17:07:51.840699] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.154 [2024-05-15 17:07:51.840703] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840706] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3cc30): datao=0, datal=4096, cccid=4 00:23:13.154 [2024-05-15 17:07:51.840711] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea4f00) on tqpair(0x1e3cc30): expected_datao=0, payload_size=4096 00:23:13.154 [2024-05-15 17:07:51.840715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840721] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840725] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.154 [2024-05-15 17:07:51.840884] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.154 [2024-05-15 17:07:51.840888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4f00) on tqpair=0x1e3cc30 00:23:13.154 [2024-05-15 17:07:51.840903] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:13.154 [2024-05-15 17:07:51.840926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3cc30) 00:23:13.154 [2024-05-15 17:07:51.840937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.154 [2024-05-15 17:07:51.840946] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840950] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.840953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e3cc30) 00:23:13.154 [2024-05-15 17:07:51.840959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.154 [2024-05-15 17:07:51.840974] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4f00, cid 4, qid 0 00:23:13.154 [2024-05-15 17:07:51.840979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea5060, cid 5, qid 0 00:23:13.154 [2024-05-15 17:07:51.841222] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.154 [2024-05-15 17:07:51.841228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.154 [2024-05-15 17:07:51.841232] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.841235] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3cc30): datao=0, datal=1024, cccid=4 00:23:13.154 [2024-05-15 17:07:51.841239] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea4f00) on tqpair(0x1e3cc30): expected_datao=0, payload_size=1024 00:23:13.154 [2024-05-15 17:07:51.841244] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.841250] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.841253] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.841259] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.154 [2024-05-15 17:07:51.841265] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.154 [2024-05-15 17:07:51.841268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.841272] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea5060) on tqpair=0x1e3cc30 00:23:13.154 [2024-05-15 17:07:51.885555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.154 [2024-05-15 17:07:51.885564] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.154 [2024-05-15 17:07:51.885568] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.885572] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4f00) on tqpair=0x1e3cc30 00:23:13.154 [2024-05-15 17:07:51.885584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.885588] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3cc30) 00:23:13.154 [2024-05-15 17:07:51.885595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.154 [2024-05-15 17:07:51.885610] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4f00, cid 4, qid 0 00:23:13.154 [2024-05-15 17:07:51.885804] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.154 [2024-05-15 17:07:51.885810] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.154 [2024-05-15 17:07:51.885814] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.885817] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3cc30): datao=0, datal=3072, cccid=4 00:23:13.154 [2024-05-15 17:07:51.885822] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea4f00) on tqpair(0x1e3cc30): expected_datao=0, payload_size=3072 00:23:13.154 [2024-05-15 17:07:51.885826] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.885848] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
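[Editor's note] The GET LOG PAGE (02) commands in the trace above, with 0x70 in the low byte of cdw10, are the identify tool paging in the discovery log that is printed next. As a hedged aside, the same two discovery records could also be read from a Linux initiator with nvme-cli; this is not part of the test run, so treat the commands as an assumption:

# assumes nvme-cli is installed and the nvme_tcp kernel module is available
sudo modprobe nvme_tcp
# query the SPDK discovery service at 10.0.0.2:4420
sudo nvme discover -t tcp -a 10.0.0.2 -s 4420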
00:23:13.154 [2024-05-15 17:07:51.885852] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.886014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.154 [2024-05-15 17:07:51.886021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.154 [2024-05-15 17:07:51.886024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.886031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4f00) on tqpair=0x1e3cc30 00:23:13.154 [2024-05-15 17:07:51.886040] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.886044] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e3cc30) 00:23:13.154 [2024-05-15 17:07:51.886050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.154 [2024-05-15 17:07:51.886063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4f00, cid 4, qid 0 00:23:13.154 [2024-05-15 17:07:51.886310] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.154 [2024-05-15 17:07:51.886316] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.154 [2024-05-15 17:07:51.886320] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.886323] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e3cc30): datao=0, datal=8, cccid=4 00:23:13.154 [2024-05-15 17:07:51.886328] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ea4f00) on tqpair(0x1e3cc30): expected_datao=0, payload_size=8 00:23:13.154 [2024-05-15 17:07:51.886332] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.886338] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.886341] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.926758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.154 [2024-05-15 17:07:51.926769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.154 [2024-05-15 17:07:51.926773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.154 [2024-05-15 17:07:51.926777] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4f00) on tqpair=0x1e3cc30 00:23:13.154 ===================================================== 00:23:13.154 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:13.154 ===================================================== 00:23:13.154 Controller Capabilities/Features 00:23:13.154 ================================ 00:23:13.154 Vendor ID: 0000 00:23:13.154 Subsystem Vendor ID: 0000 00:23:13.154 Serial Number: .................... 00:23:13.154 Model Number: ........................................ 
00:23:13.154 Firmware Version: 24.05 00:23:13.154 Recommended Arb Burst: 0 00:23:13.154 IEEE OUI Identifier: 00 00 00 00:23:13.154 Multi-path I/O 00:23:13.154 May have multiple subsystem ports: No 00:23:13.154 May have multiple controllers: No 00:23:13.154 Associated with SR-IOV VF: No 00:23:13.154 Max Data Transfer Size: 131072 00:23:13.154 Max Number of Namespaces: 0 00:23:13.154 Max Number of I/O Queues: 1024 00:23:13.154 NVMe Specification Version (VS): 1.3 00:23:13.154 NVMe Specification Version (Identify): 1.3 00:23:13.154 Maximum Queue Entries: 128 00:23:13.154 Contiguous Queues Required: Yes 00:23:13.154 Arbitration Mechanisms Supported 00:23:13.154 Weighted Round Robin: Not Supported 00:23:13.154 Vendor Specific: Not Supported 00:23:13.154 Reset Timeout: 15000 ms 00:23:13.154 Doorbell Stride: 4 bytes 00:23:13.154 NVM Subsystem Reset: Not Supported 00:23:13.154 Command Sets Supported 00:23:13.154 NVM Command Set: Supported 00:23:13.154 Boot Partition: Not Supported 00:23:13.154 Memory Page Size Minimum: 4096 bytes 00:23:13.154 Memory Page Size Maximum: 4096 bytes 00:23:13.154 Persistent Memory Region: Not Supported 00:23:13.154 Optional Asynchronous Events Supported 00:23:13.154 Namespace Attribute Notices: Not Supported 00:23:13.154 Firmware Activation Notices: Not Supported 00:23:13.154 ANA Change Notices: Not Supported 00:23:13.154 PLE Aggregate Log Change Notices: Not Supported 00:23:13.154 LBA Status Info Alert Notices: Not Supported 00:23:13.154 EGE Aggregate Log Change Notices: Not Supported 00:23:13.154 Normal NVM Subsystem Shutdown event: Not Supported 00:23:13.154 Zone Descriptor Change Notices: Not Supported 00:23:13.154 Discovery Log Change Notices: Supported 00:23:13.154 Controller Attributes 00:23:13.154 128-bit Host Identifier: Not Supported 00:23:13.154 Non-Operational Permissive Mode: Not Supported 00:23:13.155 NVM Sets: Not Supported 00:23:13.155 Read Recovery Levels: Not Supported 00:23:13.155 Endurance Groups: Not Supported 00:23:13.155 Predictable Latency Mode: Not Supported 00:23:13.155 Traffic Based Keep ALive: Not Supported 00:23:13.155 Namespace Granularity: Not Supported 00:23:13.155 SQ Associations: Not Supported 00:23:13.155 UUID List: Not Supported 00:23:13.155 Multi-Domain Subsystem: Not Supported 00:23:13.155 Fixed Capacity Management: Not Supported 00:23:13.155 Variable Capacity Management: Not Supported 00:23:13.155 Delete Endurance Group: Not Supported 00:23:13.155 Delete NVM Set: Not Supported 00:23:13.155 Extended LBA Formats Supported: Not Supported 00:23:13.155 Flexible Data Placement Supported: Not Supported 00:23:13.155 00:23:13.155 Controller Memory Buffer Support 00:23:13.155 ================================ 00:23:13.155 Supported: No 00:23:13.155 00:23:13.155 Persistent Memory Region Support 00:23:13.155 ================================ 00:23:13.155 Supported: No 00:23:13.155 00:23:13.155 Admin Command Set Attributes 00:23:13.155 ============================ 00:23:13.155 Security Send/Receive: Not Supported 00:23:13.155 Format NVM: Not Supported 00:23:13.155 Firmware Activate/Download: Not Supported 00:23:13.155 Namespace Management: Not Supported 00:23:13.155 Device Self-Test: Not Supported 00:23:13.155 Directives: Not Supported 00:23:13.155 NVMe-MI: Not Supported 00:23:13.155 Virtualization Management: Not Supported 00:23:13.155 Doorbell Buffer Config: Not Supported 00:23:13.155 Get LBA Status Capability: Not Supported 00:23:13.155 Command & Feature Lockdown Capability: Not Supported 00:23:13.155 Abort Command Limit: 1 00:23:13.155 Async 
Event Request Limit: 4 00:23:13.155 Number of Firmware Slots: N/A 00:23:13.155 Firmware Slot 1 Read-Only: N/A 00:23:13.155 Firmware Activation Without Reset: N/A 00:23:13.155 Multiple Update Detection Support: N/A 00:23:13.155 Firmware Update Granularity: No Information Provided 00:23:13.155 Per-Namespace SMART Log: No 00:23:13.155 Asymmetric Namespace Access Log Page: Not Supported 00:23:13.155 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:13.155 Command Effects Log Page: Not Supported 00:23:13.155 Get Log Page Extended Data: Supported 00:23:13.155 Telemetry Log Pages: Not Supported 00:23:13.155 Persistent Event Log Pages: Not Supported 00:23:13.155 Supported Log Pages Log Page: May Support 00:23:13.155 Commands Supported & Effects Log Page: Not Supported 00:23:13.155 Feature Identifiers & Effects Log Page:May Support 00:23:13.155 NVMe-MI Commands & Effects Log Page: May Support 00:23:13.155 Data Area 4 for Telemetry Log: Not Supported 00:23:13.155 Error Log Page Entries Supported: 128 00:23:13.155 Keep Alive: Not Supported 00:23:13.155 00:23:13.155 NVM Command Set Attributes 00:23:13.155 ========================== 00:23:13.155 Submission Queue Entry Size 00:23:13.155 Max: 1 00:23:13.155 Min: 1 00:23:13.155 Completion Queue Entry Size 00:23:13.155 Max: 1 00:23:13.155 Min: 1 00:23:13.155 Number of Namespaces: 0 00:23:13.155 Compare Command: Not Supported 00:23:13.155 Write Uncorrectable Command: Not Supported 00:23:13.155 Dataset Management Command: Not Supported 00:23:13.155 Write Zeroes Command: Not Supported 00:23:13.155 Set Features Save Field: Not Supported 00:23:13.155 Reservations: Not Supported 00:23:13.155 Timestamp: Not Supported 00:23:13.155 Copy: Not Supported 00:23:13.155 Volatile Write Cache: Not Present 00:23:13.155 Atomic Write Unit (Normal): 1 00:23:13.155 Atomic Write Unit (PFail): 1 00:23:13.155 Atomic Compare & Write Unit: 1 00:23:13.155 Fused Compare & Write: Supported 00:23:13.155 Scatter-Gather List 00:23:13.155 SGL Command Set: Supported 00:23:13.155 SGL Keyed: Supported 00:23:13.155 SGL Bit Bucket Descriptor: Not Supported 00:23:13.155 SGL Metadata Pointer: Not Supported 00:23:13.155 Oversized SGL: Not Supported 00:23:13.155 SGL Metadata Address: Not Supported 00:23:13.155 SGL Offset: Supported 00:23:13.155 Transport SGL Data Block: Not Supported 00:23:13.155 Replay Protected Memory Block: Not Supported 00:23:13.155 00:23:13.155 Firmware Slot Information 00:23:13.155 ========================= 00:23:13.155 Active slot: 0 00:23:13.155 00:23:13.155 00:23:13.155 Error Log 00:23:13.155 ========= 00:23:13.155 00:23:13.155 Active Namespaces 00:23:13.155 ================= 00:23:13.155 Discovery Log Page 00:23:13.155 ================== 00:23:13.155 Generation Counter: 2 00:23:13.155 Number of Records: 2 00:23:13.155 Record Format: 0 00:23:13.155 00:23:13.155 Discovery Log Entry 0 00:23:13.155 ---------------------- 00:23:13.155 Transport Type: 3 (TCP) 00:23:13.155 Address Family: 1 (IPv4) 00:23:13.155 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:13.155 Entry Flags: 00:23:13.155 Duplicate Returned Information: 1 00:23:13.155 Explicit Persistent Connection Support for Discovery: 1 00:23:13.155 Transport Requirements: 00:23:13.155 Secure Channel: Not Required 00:23:13.155 Port ID: 0 (0x0000) 00:23:13.155 Controller ID: 65535 (0xffff) 00:23:13.155 Admin Max SQ Size: 128 00:23:13.155 Transport Service Identifier: 4420 00:23:13.155 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:13.155 Transport Address: 10.0.0.2 00:23:13.155 
Discovery Log Entry 1 00:23:13.155 ---------------------- 00:23:13.155 Transport Type: 3 (TCP) 00:23:13.155 Address Family: 1 (IPv4) 00:23:13.155 Subsystem Type: 2 (NVM Subsystem) 00:23:13.155 Entry Flags: 00:23:13.155 Duplicate Returned Information: 0 00:23:13.155 Explicit Persistent Connection Support for Discovery: 0 00:23:13.155 Transport Requirements: 00:23:13.155 Secure Channel: Not Required 00:23:13.155 Port ID: 0 (0x0000) 00:23:13.155 Controller ID: 65535 (0xffff) 00:23:13.155 Admin Max SQ Size: 128 00:23:13.155 Transport Service Identifier: 4420 00:23:13.155 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:13.155 Transport Address: 10.0.0.2 [2024-05-15 17:07:51.926862] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:13.155 [2024-05-15 17:07:51.926875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-05-15 17:07:51.926882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-05-15 17:07:51.926888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-05-15 17:07:51.926894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.155 [2024-05-15 17:07:51.926902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.926906] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.926910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.155 [2024-05-15 17:07:51.926917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-05-15 17:07:51.926930] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.155 [2024-05-15 17:07:51.927201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.155 [2024-05-15 17:07:51.927207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.155 [2024-05-15 17:07:51.927211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.927214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.155 [2024-05-15 17:07:51.927222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.927226] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.927229] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.155 [2024-05-15 17:07:51.927238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-05-15 17:07:51.927250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.155 [2024-05-15 17:07:51.927434] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.155 [2024-05-15 17:07:51.927440] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.155 [2024-05-15 17:07:51.927443] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.927447] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.155 [2024-05-15 17:07:51.927453] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:13.155 [2024-05-15 17:07:51.927457] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:13.155 [2024-05-15 17:07:51.927466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.927470] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.155 [2024-05-15 17:07:51.927473] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.155 [2024-05-15 17:07:51.927480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.155 [2024-05-15 17:07:51.927490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.155 [2024-05-15 17:07:51.927684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.155 [2024-05-15 17:07:51.927691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.927694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.927698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.927709] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.927712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.927716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.927722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.927732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.927907] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 17:07:51.927913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.927917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.927920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.927930] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.927934] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.927938] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.927944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.927954] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.928137] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 
17:07:51.928143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.928147] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928151] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.928161] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928167] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928170] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.928177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.928187] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.928367] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 17:07:51.928373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.928376] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.928390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.928404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.928413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.928604] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 17:07:51.928610] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.928614] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.928628] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928631] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.928641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.928651] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.928813] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 17:07:51.928819] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.928823] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:13.156 [2024-05-15 17:07:51.928827] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.928837] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.928844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.928850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.928860] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.932553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 17:07:51.932563] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.932567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.932571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.932582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.932585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.932592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e3cc30) 00:23:13.156 [2024-05-15 17:07:51.932599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.156 [2024-05-15 17:07:51.932612] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ea4da0, cid 3, qid 0 00:23:13.156 [2024-05-15 17:07:51.932793] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.156 [2024-05-15 17:07:51.932800] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.156 [2024-05-15 17:07:51.932803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.156 [2024-05-15 17:07:51.932807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ea4da0) on tqpair=0x1e3cc30 00:23:13.156 [2024-05-15 17:07:51.932815] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:13.156 00:23:13.156 17:07:51 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:13.156 [2024-05-15 17:07:51.970309] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
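[Editor's note] The identify output above, including the *DEBUG* transport traces, comes from spdk_nvme_identify run with -L all, which turns on the debug log flags. A minimal sketch of the two invocations the test makes, using the transport ID strings recorded in the log (drop -L all for the plain report):

# discovery subsystem (first run above)
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
# NVM subsystem cnode1 (second run, whose startup begins just above)
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'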
00:23:13.156 [2024-05-15 17:07:51.970350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553883 ] 00:23:13.156 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.422 [2024-05-15 17:07:52.003119] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:13.422 [2024-05-15 17:07:52.003158] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:13.422 [2024-05-15 17:07:52.003163] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:13.422 [2024-05-15 17:07:52.003174] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:13.422 [2024-05-15 17:07:52.003181] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:13.422 [2024-05-15 17:07:52.006582] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:13.422 [2024-05-15 17:07:52.006605] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe24c30 0 00:23:13.422 [2024-05-15 17:07:52.014557] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:13.422 [2024-05-15 17:07:52.014570] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:13.422 [2024-05-15 17:07:52.014574] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:13.422 [2024-05-15 17:07:52.014578] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:13.422 [2024-05-15 17:07:52.014609] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.014614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.014618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.422 [2024-05-15 17:07:52.014631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:13.422 [2024-05-15 17:07:52.014645] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.422 [2024-05-15 17:07:52.022556] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.422 [2024-05-15 17:07:52.022565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.422 [2024-05-15 17:07:52.022568] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.022573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.422 [2024-05-15 17:07:52.022585] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:13.422 [2024-05-15 17:07:52.022591] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:13.422 [2024-05-15 17:07:52.022596] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:13.422 [2024-05-15 17:07:52.022606] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.022610] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.422 [2024-05-15 
17:07:52.022613] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.422 [2024-05-15 17:07:52.022621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.422 [2024-05-15 17:07:52.022633] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.422 [2024-05-15 17:07:52.022840] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.422 [2024-05-15 17:07:52.022846] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.422 [2024-05-15 17:07:52.022850] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.022854] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.422 [2024-05-15 17:07:52.022858] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:13.422 [2024-05-15 17:07:52.022866] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:13.422 [2024-05-15 17:07:52.022872] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.022876] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.022879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.422 [2024-05-15 17:07:52.022886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.422 [2024-05-15 17:07:52.022896] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.422 [2024-05-15 17:07:52.023090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.422 [2024-05-15 17:07:52.023096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.422 [2024-05-15 17:07:52.023100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.422 [2024-05-15 17:07:52.023108] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:13.422 [2024-05-15 17:07:52.023117] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:13.422 [2024-05-15 17:07:52.023124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.422 [2024-05-15 17:07:52.023137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.422 [2024-05-15 17:07:52.023147] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.422 [2024-05-15 17:07:52.023347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.422 [2024-05-15 17:07:52.023353] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.422 
[2024-05-15 17:07:52.023356] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023360] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.422 [2024-05-15 17:07:52.023367] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:13.422 [2024-05-15 17:07:52.023377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.422 [2024-05-15 17:07:52.023390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.422 [2024-05-15 17:07:52.023400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.422 [2024-05-15 17:07:52.023568] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.422 [2024-05-15 17:07:52.023575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.422 [2024-05-15 17:07:52.023578] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023582] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.422 [2024-05-15 17:07:52.023586] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:13.422 [2024-05-15 17:07:52.023591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:13.422 [2024-05-15 17:07:52.023598] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:13.422 [2024-05-15 17:07:52.023703] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:13.422 [2024-05-15 17:07:52.023707] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:13.422 [2024-05-15 17:07:52.023714] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.422 [2024-05-15 17:07:52.023718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.023722] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.023728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.423 [2024-05-15 17:07:52.023738] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.423 [2024-05-15 17:07:52.023917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.423 [2024-05-15 17:07:52.023924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.423 [2024-05-15 17:07:52.023927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.023931] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.423 
[2024-05-15 17:07:52.023935] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:13.423 [2024-05-15 17:07:52.023944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.023948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.023952] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.023958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.423 [2024-05-15 17:07:52.023968] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.423 [2024-05-15 17:07:52.024114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.423 [2024-05-15 17:07:52.024120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.423 [2024-05-15 17:07:52.024124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024127] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.423 [2024-05-15 17:07:52.024134] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:13.423 [2024-05-15 17:07:52.024138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.024146] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:13.423 [2024-05-15 17:07:52.024153] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.024161] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024165] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.423 [2024-05-15 17:07:52.024181] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.423 [2024-05-15 17:07:52.024366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.423 [2024-05-15 17:07:52.024372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.423 [2024-05-15 17:07:52.024376] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024379] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=4096, cccid=0 00:23:13.423 [2024-05-15 17:07:52.024384] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8c980) on tqpair(0xe24c30): expected_datao=0, payload_size=4096 00:23:13.423 [2024-05-15 17:07:52.024388] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024396] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024399] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:23:13.423 [2024-05-15 17:07:52.024551] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.423 [2024-05-15 17:07:52.024558] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.423 [2024-05-15 17:07:52.024561] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024565] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.423 [2024-05-15 17:07:52.024572] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:13.423 [2024-05-15 17:07:52.024577] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:13.423 [2024-05-15 17:07:52.024581] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:13.423 [2024-05-15 17:07:52.024585] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:13.423 [2024-05-15 17:07:52.024589] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:13.423 [2024-05-15 17:07:52.024594] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.024604] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.024612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024616] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024619] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:13.423 [2024-05-15 17:07:52.024637] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.423 [2024-05-15 17:07:52.024810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.423 [2024-05-15 17:07:52.024816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.423 [2024-05-15 17:07:52.024819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8c980) on tqpair=0xe24c30 00:23:13.423 [2024-05-15 17:07:52.024831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024835] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024839] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.423 [2024-05-15 17:07:52.024851] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024854] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024858] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.423 [2024-05-15 17:07:52.024870] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.423 [2024-05-15 17:07:52.024888] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024892] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024895] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.423 [2024-05-15 17:07:52.024906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.024913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.024919] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.024923] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.024930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.423 [2024-05-15 17:07:52.024941] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8c980, cid 0, qid 0 00:23:13.423 [2024-05-15 17:07:52.024946] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cae0, cid 1, qid 0 00:23:13.423 [2024-05-15 17:07:52.024951] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cc40, cid 2, qid 0 00:23:13.423 [2024-05-15 17:07:52.024956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.423 [2024-05-15 17:07:52.024960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.423 [2024-05-15 17:07:52.025191] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.423 [2024-05-15 17:07:52.025198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.423 [2024-05-15 17:07:52.025201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.025205] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.423 [2024-05-15 17:07:52.025213] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:13.423 [2024-05-15 17:07:52.025218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:13.423 
[2024-05-15 17:07:52.025226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.025232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:13.423 [2024-05-15 17:07:52.025238] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.025242] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.025245] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.423 [2024-05-15 17:07:52.025252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:13.423 [2024-05-15 17:07:52.025262] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.423 [2024-05-15 17:07:52.025454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.423 [2024-05-15 17:07:52.025460] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.423 [2024-05-15 17:07:52.025464] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.423 [2024-05-15 17:07:52.025467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.025518] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.025528] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.025535] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.025539] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.025550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.424 [2024-05-15 17:07:52.025561] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.424 [2024-05-15 17:07:52.025726] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.424 [2024-05-15 17:07:52.025733] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.424 [2024-05-15 17:07:52.025736] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.025740] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=4096, cccid=4 00:23:13.424 [2024-05-15 17:07:52.025744] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8cf00) on tqpair(0xe24c30): expected_datao=0, payload_size=4096 00:23:13.424 [2024-05-15 17:07:52.025748] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.025803] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.025807] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.025970] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.025976] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.025979] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.025983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.025994] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:13.424 [2024-05-15 17:07:52.026007] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.026019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.026026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026029] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.026036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.424 [2024-05-15 17:07:52.026046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.424 [2024-05-15 17:07:52.026278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.424 [2024-05-15 17:07:52.026284] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.424 [2024-05-15 17:07:52.026288] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026291] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=4096, cccid=4 00:23:13.424 [2024-05-15 17:07:52.026296] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8cf00) on tqpair(0xe24c30): expected_datao=0, payload_size=4096 00:23:13.424 [2024-05-15 17:07:52.026300] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026306] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026310] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.026466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.026469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026473] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.026483] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.026491] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.026498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.026502] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.026508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.424 [2024-05-15 17:07:52.026518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.424 [2024-05-15 17:07:52.030554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.424 [2024-05-15 17:07:52.030561] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.424 [2024-05-15 17:07:52.030565] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030568] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=4096, cccid=4 00:23:13.424 [2024-05-15 17:07:52.030573] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8cf00) on tqpair(0xe24c30): expected_datao=0, payload_size=4096 00:23:13.424 [2024-05-15 17:07:52.030577] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030583] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030587] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.030598] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.030602] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030605] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.030617] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.030625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.030633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.030639] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.030644] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.030649] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:13.424 [2024-05-15 17:07:52.030653] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:13.424 [2024-05-15 17:07:52.030658] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:13.424 [2024-05-15 17:07:52.030674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.030685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.424 [2024-05-15 17:07:52.030691] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030699] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.030705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.424 [2024-05-15 17:07:52.030718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.424 [2024-05-15 17:07:52.030724] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d060, cid 5, qid 0 00:23:13.424 [2024-05-15 17:07:52.030927] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.030934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.030937] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.030948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.030953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.030957] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030960] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d060) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.030969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.030973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.030979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.424 [2024-05-15 17:07:52.030989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d060, cid 5, qid 0 00:23:13.424 [2024-05-15 17:07:52.031188] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.031194] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.031197] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.031201] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d060) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.031212] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.031216] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.031222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.424 [2024-05-15 17:07:52.031231] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d060, cid 5, qid 0 00:23:13.424 [2024-05-15 17:07:52.031405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.424 [2024-05-15 17:07:52.031412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.424 [2024-05-15 17:07:52.031415] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.031419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d060) on tqpair=0xe24c30 00:23:13.424 [2024-05-15 17:07:52.031428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.424 [2024-05-15 17:07:52.031431] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe24c30) 00:23:13.424 [2024-05-15 17:07:52.031438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.425 [2024-05-15 17:07:52.031447] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d060, cid 5, qid 0 00:23:13.425 [2024-05-15 17:07:52.031655] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.425 [2024-05-15 17:07:52.031661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.425 [2024-05-15 17:07:52.031665] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.031669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d060) on tqpair=0xe24c30 00:23:13.425 [2024-05-15 17:07:52.031679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.031683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe24c30) 00:23:13.425 [2024-05-15 17:07:52.031689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.425 [2024-05-15 17:07:52.031696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.031700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe24c30) 00:23:13.425 [2024-05-15 17:07:52.031706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.425 [2024-05-15 17:07:52.031713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.031717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe24c30) 00:23:13.425 [2024-05-15 17:07:52.031723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.425 [2024-05-15 17:07:52.031732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.031736] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe24c30) 00:23:13.425 [2024-05-15 17:07:52.031742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.425 [2024-05-15 17:07:52.031753] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d060, cid 5, qid 0 00:23:13.425 [2024-05-15 17:07:52.031758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cf00, cid 4, qid 0 00:23:13.425 [2024-05-15 17:07:52.031763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d1c0, cid 6, qid 0 00:23:13.425 [2024-05-15 17:07:52.031768] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d320, cid 7, qid 0 00:23:13.425 [2024-05-15 17:07:52.032002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.425 [2024-05-15 17:07:52.032009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.425 [2024-05-15 17:07:52.032012] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032016] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=8192, cccid=5 00:23:13.425 [2024-05-15 17:07:52.032020] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8d060) on tqpair(0xe24c30): expected_datao=0, payload_size=8192 00:23:13.425 [2024-05-15 17:07:52.032024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032101] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032105] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.425 [2024-05-15 17:07:52.032117] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.425 [2024-05-15 17:07:52.032120] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032124] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=512, cccid=4 00:23:13.425 [2024-05-15 17:07:52.032128] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8cf00) on tqpair(0xe24c30): expected_datao=0, payload_size=512 00:23:13.425 [2024-05-15 17:07:52.032132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032139] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032142] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032148] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.425 [2024-05-15 17:07:52.032153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.425 [2024-05-15 17:07:52.032157] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032160] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=512, cccid=6 00:23:13.425 [2024-05-15 17:07:52.032164] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe8d1c0) on tqpair(0xe24c30): expected_datao=0, payload_size=512 00:23:13.425 [2024-05-15 17:07:52.032169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032175] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032178] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:13.425 [2024-05-15 17:07:52.032190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:13.425 [2024-05-15 17:07:52.032193] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032197] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe24c30): datao=0, datal=4096, cccid=7 00:23:13.425 [2024-05-15 17:07:52.032201] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xe8d320) on tqpair(0xe24c30): expected_datao=0, payload_size=4096 00:23:13.425 [2024-05-15 17:07:52.032205] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032211] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032215] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.425 [2024-05-15 17:07:52.032228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.425 [2024-05-15 17:07:52.032232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032235] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d060) on tqpair=0xe24c30 00:23:13.425 [2024-05-15 17:07:52.032247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.425 [2024-05-15 17:07:52.032253] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.425 [2024-05-15 17:07:52.032256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cf00) on tqpair=0xe24c30 00:23:13.425 [2024-05-15 17:07:52.032270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.425 [2024-05-15 17:07:52.032275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.425 [2024-05-15 17:07:52.032279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032283] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d1c0) on tqpair=0xe24c30 00:23:13.425 [2024-05-15 17:07:52.032291] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.425 [2024-05-15 17:07:52.032297] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.425 [2024-05-15 17:07:52.032300] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.425 [2024-05-15 17:07:52.032304] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d320) on tqpair=0xe24c30 00:23:13.425 ===================================================== 00:23:13.425 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.425 ===================================================== 00:23:13.425 Controller Capabilities/Features 00:23:13.425 ================================ 00:23:13.425 Vendor ID: 8086 00:23:13.425 Subsystem Vendor ID: 8086 00:23:13.425 Serial Number: SPDK00000000000001 00:23:13.425 Model Number: SPDK bdev Controller 00:23:13.425 Firmware Version: 24.05 00:23:13.425 Recommended Arb Burst: 6 00:23:13.425 IEEE OUI Identifier: e4 d2 5c 00:23:13.425 Multi-path I/O 00:23:13.425 May have multiple subsystem ports: Yes 00:23:13.425 May have multiple controllers: Yes 00:23:13.425 Associated with SR-IOV VF: No 00:23:13.425 Max Data Transfer Size: 131072 00:23:13.425 Max Number of Namespaces: 32 00:23:13.425 Max Number of I/O Queues: 127 00:23:13.425 NVMe Specification Version (VS): 1.3 00:23:13.425 NVMe Specification Version (Identify): 1.3 00:23:13.425 Maximum Queue Entries: 128 00:23:13.425 Contiguous Queues Required: Yes 00:23:13.425 Arbitration Mechanisms Supported 00:23:13.425 Weighted Round Robin: Not Supported 00:23:13.425 Vendor Specific: Not Supported 00:23:13.425 Reset Timeout: 15000 ms 00:23:13.425 Doorbell Stride: 4 bytes 00:23:13.425 
NVM Subsystem Reset: Not Supported
00:23:13.425 Command Sets Supported
00:23:13.425 NVM Command Set: Supported
00:23:13.425 Boot Partition: Not Supported
00:23:13.425 Memory Page Size Minimum: 4096 bytes
00:23:13.425 Memory Page Size Maximum: 4096 bytes
00:23:13.425 Persistent Memory Region: Not Supported
00:23:13.425 Optional Asynchronous Events Supported
00:23:13.425 Namespace Attribute Notices: Supported
00:23:13.425 Firmware Activation Notices: Not Supported
00:23:13.425 ANA Change Notices: Not Supported
00:23:13.425 PLE Aggregate Log Change Notices: Not Supported
00:23:13.425 LBA Status Info Alert Notices: Not Supported
00:23:13.425 EGE Aggregate Log Change Notices: Not Supported
00:23:13.425 Normal NVM Subsystem Shutdown event: Not Supported
00:23:13.425 Zone Descriptor Change Notices: Not Supported
00:23:13.425 Discovery Log Change Notices: Not Supported
00:23:13.425 Controller Attributes
00:23:13.425 128-bit Host Identifier: Supported
00:23:13.425 Non-Operational Permissive Mode: Not Supported
00:23:13.425 NVM Sets: Not Supported
00:23:13.425 Read Recovery Levels: Not Supported
00:23:13.425 Endurance Groups: Not Supported
00:23:13.425 Predictable Latency Mode: Not Supported
00:23:13.425 Traffic Based Keep ALive: Not Supported
00:23:13.425 Namespace Granularity: Not Supported
00:23:13.425 SQ Associations: Not Supported
00:23:13.425 UUID List: Not Supported
00:23:13.425 Multi-Domain Subsystem: Not Supported
00:23:13.425 Fixed Capacity Management: Not Supported
00:23:13.425 Variable Capacity Management: Not Supported
00:23:13.425 Delete Endurance Group: Not Supported
00:23:13.425 Delete NVM Set: Not Supported
00:23:13.425 Extended LBA Formats Supported: Not Supported
00:23:13.425 Flexible Data Placement Supported: Not Supported
00:23:13.425 
00:23:13.426 Controller Memory Buffer Support
00:23:13.426 ================================
00:23:13.426 Supported: No
00:23:13.426 
00:23:13.426 Persistent Memory Region Support
00:23:13.426 ================================
00:23:13.426 Supported: No
00:23:13.426 
00:23:13.426 Admin Command Set Attributes
00:23:13.426 ============================
00:23:13.426 Security Send/Receive: Not Supported
00:23:13.426 Format NVM: Not Supported
00:23:13.426 Firmware Activate/Download: Not Supported
00:23:13.426 Namespace Management: Not Supported
00:23:13.426 Device Self-Test: Not Supported
00:23:13.426 Directives: Not Supported
00:23:13.426 NVMe-MI: Not Supported
00:23:13.426 Virtualization Management: Not Supported
00:23:13.426 Doorbell Buffer Config: Not Supported
00:23:13.426 Get LBA Status Capability: Not Supported
00:23:13.426 Command & Feature Lockdown Capability: Not Supported
00:23:13.426 Abort Command Limit: 4
00:23:13.426 Async Event Request Limit: 4
00:23:13.426 Number of Firmware Slots: N/A
00:23:13.426 Firmware Slot 1 Read-Only: N/A
00:23:13.426 Firmware Activation Without Reset: N/A
00:23:13.426 Multiple Update Detection Support: N/A
00:23:13.426 Firmware Update Granularity: No Information Provided
00:23:13.426 Per-Namespace SMART Log: No
00:23:13.426 Asymmetric Namespace Access Log Page: Not Supported
00:23:13.426 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:13.426 Command Effects Log Page: Supported
00:23:13.426 Get Log Page Extended Data: Supported
00:23:13.426 Telemetry Log Pages: Not Supported
00:23:13.426 Persistent Event Log Pages: Not Supported
00:23:13.426 Supported Log Pages Log Page: May Support
00:23:13.426 Commands Supported & Effects Log Page: Not Supported
00:23:13.426 Feature Identifiers & Effects Log Page:May Support
00:23:13.426 NVMe-MI Commands & Effects Log Page: May Support
00:23:13.426 Data Area 4 for Telemetry Log: Not Supported
00:23:13.426 Error Log Page Entries Supported: 128
00:23:13.426 Keep Alive: Supported
00:23:13.426 Keep Alive Granularity: 10000 ms
00:23:13.426 
00:23:13.426 NVM Command Set Attributes
00:23:13.426 ==========================
00:23:13.426 Submission Queue Entry Size
00:23:13.426 Max: 64
00:23:13.426 Min: 64
00:23:13.426 Completion Queue Entry Size
00:23:13.426 Max: 16
00:23:13.426 Min: 16
00:23:13.426 Number of Namespaces: 32
00:23:13.426 Compare Command: Supported
00:23:13.426 Write Uncorrectable Command: Not Supported
00:23:13.426 Dataset Management Command: Supported
00:23:13.426 Write Zeroes Command: Supported
00:23:13.426 Set Features Save Field: Not Supported
00:23:13.426 Reservations: Supported
00:23:13.426 Timestamp: Not Supported
00:23:13.426 Copy: Supported
00:23:13.426 Volatile Write Cache: Present
00:23:13.426 Atomic Write Unit (Normal): 1
00:23:13.426 Atomic Write Unit (PFail): 1
00:23:13.426 Atomic Compare & Write Unit: 1
00:23:13.426 Fused Compare & Write: Supported
00:23:13.426 Scatter-Gather List
00:23:13.426 SGL Command Set: Supported
00:23:13.426 SGL Keyed: Supported
00:23:13.426 SGL Bit Bucket Descriptor: Not Supported
00:23:13.426 SGL Metadata Pointer: Not Supported
00:23:13.426 Oversized SGL: Not Supported
00:23:13.426 SGL Metadata Address: Not Supported
00:23:13.426 SGL Offset: Supported
00:23:13.426 Transport SGL Data Block: Not Supported
00:23:13.426 Replay Protected Memory Block: Not Supported
00:23:13.426 
00:23:13.426 Firmware Slot Information
00:23:13.426 =========================
00:23:13.426 Active slot: 1
00:23:13.426 Slot 1 Firmware Revision: 24.05
00:23:13.426 
00:23:13.426 
00:23:13.426 Commands Supported and Effects
00:23:13.426 ==============================
00:23:13.426 Admin Commands
00:23:13.426 --------------
00:23:13.426 Get Log Page (02h): Supported
00:23:13.426 Identify (06h): Supported
00:23:13.426 Abort (08h): Supported
00:23:13.426 Set Features (09h): Supported
00:23:13.426 Get Features (0Ah): Supported
00:23:13.426 Asynchronous Event Request (0Ch): Supported
00:23:13.426 Keep Alive (18h): Supported
00:23:13.426 I/O Commands
00:23:13.426 ------------
00:23:13.426 Flush (00h): Supported LBA-Change
00:23:13.426 Write (01h): Supported LBA-Change
00:23:13.426 Read (02h): Supported
00:23:13.426 Compare (05h): Supported
00:23:13.426 Write Zeroes (08h): Supported LBA-Change
00:23:13.426 Dataset Management (09h): Supported LBA-Change
00:23:13.426 Copy (19h): Supported LBA-Change
00:23:13.426 Unknown (79h): Supported LBA-Change
00:23:13.426 Unknown (7Ah): Supported
00:23:13.426 
00:23:13.426 Error Log
00:23:13.426 =========
00:23:13.426 
00:23:13.426 Arbitration
00:23:13.426 ===========
00:23:13.426 Arbitration Burst: 1
00:23:13.426 
00:23:13.426 Power Management
00:23:13.426 ================
00:23:13.426 Number of Power States: 1
00:23:13.426 Current Power State: Power State #0
00:23:13.426 Power State #0:
00:23:13.426 Max Power: 0.00 W
00:23:13.426 Non-Operational State: Operational
00:23:13.426 Entry Latency: Not Reported
00:23:13.426 Exit Latency: Not Reported
00:23:13.426 Relative Read Throughput: 0
00:23:13.426 Relative Read Latency: 0
00:23:13.426 Relative Write Throughput: 0
00:23:13.426 Relative Write Latency: 0
00:23:13.426 Idle Power: Not Reported
00:23:13.426 Active Power: Not Reported
00:23:13.426 Non-Operational Permissive Mode: Not Supported
00:23:13.426 
00:23:13.426 Health Information
00:23:13.426 ==================
00:23:13.426 Critical Warnings: 00:23:13.426 Available Spare Space: OK 00:23:13.426 Temperature: OK 00:23:13.426 Device Reliability: OK 00:23:13.426 Read Only: No 00:23:13.426 Volatile Memory Backup: OK 00:23:13.426 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:13.426 Temperature Threshold: [2024-05-15 17:07:52.032403] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.426 [2024-05-15 17:07:52.032408] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe24c30) 00:23:13.426 [2024-05-15 17:07:52.032415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.426 [2024-05-15 17:07:52.032426] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8d320, cid 7, qid 0 00:23:13.426 [2024-05-15 17:07:52.032654] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.426 [2024-05-15 17:07:52.032661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.426 [2024-05-15 17:07:52.032664] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.426 [2024-05-15 17:07:52.032668] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8d320) on tqpair=0xe24c30 00:23:13.426 [2024-05-15 17:07:52.032695] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:13.426 [2024-05-15 17:07:52.032706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.426 [2024-05-15 17:07:52.032712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.426 [2024-05-15 17:07:52.032718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.426 [2024-05-15 17:07:52.032724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.426 [2024-05-15 17:07:52.032732] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.426 [2024-05-15 17:07:52.032736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.426 [2024-05-15 17:07:52.032740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.426 [2024-05-15 17:07:52.032747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.426 [2024-05-15 17:07:52.032758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.426 [2024-05-15 17:07:52.032914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.032920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.032924] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.032928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.032934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.032938] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.032941] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.032948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.032963] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.033155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.033161] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.033165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033169] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.033173] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:13.427 [2024-05-15 17:07:52.033178] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:13.427 [2024-05-15 17:07:52.033187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033191] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033194] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.033201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.033211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.033367] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.033373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.033376] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.033390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.033404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.033413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.033607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.033614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.033617] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033621] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.033630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033634] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033638] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.033644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.033654] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.033871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.033877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.033880] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.033893] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033897] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.033901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.033909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.033919] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.034142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.034148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.034152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.034155] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.034165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.034169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.034172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.034179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.034188] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.034359] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.034366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.034369] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.034373] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.034382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.034386] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.034389] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 
[2024-05-15 17:07:52.034396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.034406] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.038555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.038564] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.038567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.038571] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.038581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.038585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.038589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe24c30) 00:23:13.427 [2024-05-15 17:07:52.038595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.427 [2024-05-15 17:07:52.038607] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe8cda0, cid 3, qid 0 00:23:13.427 [2024-05-15 17:07:52.038776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:13.427 [2024-05-15 17:07:52.038783] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:13.427 [2024-05-15 17:07:52.038786] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:13.427 [2024-05-15 17:07:52.038790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe8cda0) on tqpair=0xe24c30 00:23:13.427 [2024-05-15 17:07:52.038797] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:13.427 0 Kelvin (-273 Celsius) 00:23:13.427 Available Spare: 0% 00:23:13.427 Available Spare Threshold: 0% 00:23:13.427 Life Percentage Used: 0% 00:23:13.427 Data Units Read: 0 00:23:13.427 Data Units Written: 0 00:23:13.427 Host Read Commands: 0 00:23:13.427 Host Write Commands: 0 00:23:13.427 Controller Busy Time: 0 minutes 00:23:13.427 Power Cycles: 0 00:23:13.427 Power On Hours: 0 hours 00:23:13.427 Unsafe Shutdowns: 0 00:23:13.427 Unrecoverable Media Errors: 0 00:23:13.427 Lifetime Error Log Entries: 0 00:23:13.427 Warning Temperature Time: 0 minutes 00:23:13.427 Critical Temperature Time: 0 minutes 00:23:13.427 00:23:13.427 Number of Queues 00:23:13.427 ================ 00:23:13.427 Number of I/O Submission Queues: 127 00:23:13.427 Number of I/O Completion Queues: 127 00:23:13.427 00:23:13.427 Active Namespaces 00:23:13.427 ================= 00:23:13.427 Namespace ID:1 00:23:13.427 Error Recovery Timeout: Unlimited 00:23:13.427 Command Set Identifier: NVM (00h) 00:23:13.427 Deallocate: Supported 00:23:13.427 Deallocated/Unwritten Error: Not Supported 00:23:13.427 Deallocated Read Value: Unknown 00:23:13.427 Deallocate in Write Zeroes: Not Supported 00:23:13.427 Deallocated Guard Field: 0xFFFF 00:23:13.428 Flush: Supported 00:23:13.428 Reservation: Supported 00:23:13.428 Namespace Sharing Capabilities: Multiple Controllers 00:23:13.428 Size (in LBAs): 131072 (0GiB) 00:23:13.428 Capacity (in LBAs): 131072 (0GiB) 00:23:13.428 Utilization (in LBAs): 131072 (0GiB) 00:23:13.428 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:23:13.428 EUI64: ABCDEF0123456789 00:23:13.428 UUID: fec3a933-27f8-4199-ad86-28140df15f21 00:23:13.428 Thin Provisioning: Not Supported 00:23:13.428 Per-NS Atomic Units: Yes 00:23:13.428 Atomic Boundary Size (Normal): 0 00:23:13.428 Atomic Boundary Size (PFail): 0 00:23:13.428 Atomic Boundary Offset: 0 00:23:13.428 Maximum Single Source Range Length: 65535 00:23:13.428 Maximum Copy Length: 65535 00:23:13.428 Maximum Source Range Count: 1 00:23:13.428 NGUID/EUI64 Never Reused: No 00:23:13.428 Namespace Write Protected: No 00:23:13.428 Number of LBA Formats: 1 00:23:13.428 Current LBA Format: LBA Format #00 00:23:13.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:13.428 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.428 rmmod nvme_tcp 00:23:13.428 rmmod nvme_fabrics 00:23:13.428 rmmod nvme_keyring 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1553541 ']' 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1553541 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1553541 ']' 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1553541 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1553541 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1553541' 00:23:13.428 killing process with pid 1553541 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1553541 00:23:13.428 [2024-05-15 17:07:52.182110] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:13.428 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1553541 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.690 17:07:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.606 17:07:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:15.606 00:23:15.606 real 0m10.927s 00:23:15.606 user 0m7.543s 00:23:15.606 sys 0m5.633s 00:23:15.606 17:07:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:15.606 17:07:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.606 ************************************ 00:23:15.606 END TEST nvmf_identify 00:23:15.606 ************************************ 00:23:15.867 17:07:54 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:15.867 17:07:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:15.867 17:07:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:15.867 17:07:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.867 ************************************ 00:23:15.867 START TEST nvmf_perf 00:23:15.867 ************************************ 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:15.867 * Looking for test storage... 
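Before the nvmf_perf output continues, the nvmf_identify teardown traced above condenses to a few commands (a sketch of what the trace shows, not a replacement for nvmftestfini; the PID, interface name and NQN are specific to this run):

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # rpc_cmd is the autotest wrapper around scripts/rpc.py
modprobe -v -r nvme-tcp                                    # host-side modules unloaded in turn
modprobe -v -r nvme-fabrics
kill 1553541                                               # stop the nvmf_tgt (reactor_0) started for this test
ip -4 addr flush cvl_0_1                                   # clear the initiator-side test address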
00:23:15.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.867 17:07:54 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.867 17:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:24.005 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:24.005 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.005 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:24.006 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:24.006 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:23:24.006 00:23:24.006 --- 10.0.0.2 ping statistics --- 00:23:24.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.006 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:23:24.006 00:23:24.006 --- 10.0.0.1 ping statistics --- 00:23:24.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.006 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1557919 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1557919 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1557919 ']' 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:24.006 17:08:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.006 [2024-05-15 17:08:01.711796] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:23:24.006 [2024-05-15 17:08:01.711864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.006 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.006 [2024-05-15 17:08:01.783991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.006 [2024-05-15 17:08:01.860292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.006 [2024-05-15 17:08:01.860332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
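For reference, the nvmf_tcp_init sequence traced just above reduces to the following namespace wiring (condensed from the trace; cvl_0_0 and cvl_0_1 are the two e810 ports found earlier in this log, and 10.0.0.1/10.0.0.2 are the test defaults):

ip netns add cvl_0_0_ns_spdk                                        # the target runs in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator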
00:23:24.006 [2024-05-15 17:08:01.860341] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.006 [2024-05-15 17:08:01.860347] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.006 [2024-05-15 17:08:01.860353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.006 [2024-05-15 17:08:01.860531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.006 [2024-05-15 17:08:01.860650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.006 [2024-05-15 17:08:01.860767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.006 [2024-05-15 17:08:01.860769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:24.006 17:08:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:24.267 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:24.267 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:24.528 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:24.528 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:24.789 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:24.789 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:24.789 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:24.789 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:24.789 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.789 [2024-05-15 17:08:03.504891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.789 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.052 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:25.052 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.052 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:25.052 17:08:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:25.313 17:08:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.574 [2024-05-15 17:08:04.175134] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:25.574 [2024-05-15 17:08:04.175392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.574 17:08:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:25.574 17:08:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:25.574 17:08:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:25.574 17:08:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:25.574 17:08:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:26.959 Initializing NVMe Controllers 00:23:26.959 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:26.959 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:26.959 Initialization complete. Launching workers. 00:23:26.959 ======================================================== 00:23:26.959 Latency(us) 00:23:26.959 Device Information : IOPS MiB/s Average min max 00:23:26.959 PCIE (0000:65:00.0) NSID 1 from core 0: 79515.63 310.61 402.84 14.04 4515.74 00:23:26.959 ======================================================== 00:23:26.959 Total : 79515.63 310.61 402.84 14.04 4515.74 00:23:26.959 00:23:26.959 17:08:05 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.959 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.347 Initializing NVMe Controllers 00:23:28.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:28.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:28.347 Initialization complete. Launching workers. 
00:23:28.347 ======================================================== 00:23:28.347 Latency(us) 00:23:28.347 Device Information : IOPS MiB/s Average min max 00:23:28.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 56.00 0.22 18248.52 317.43 46166.64 00:23:28.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21839.23 6984.47 47906.03 00:23:28.347 ======================================================== 00:23:28.347 Total : 102.00 0.40 19867.86 317.43 47906.03 00:23:28.347 00:23:28.347 17:08:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:28.347 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.735 Initializing NVMe Controllers 00:23:29.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:29.735 Initialization complete. Launching workers. 00:23:29.735 ======================================================== 00:23:29.735 Latency(us) 00:23:29.735 Device Information : IOPS MiB/s Average min max 00:23:29.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10989.00 42.93 2917.21 484.59 6443.87 00:23:29.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3907.00 15.26 8228.34 6299.51 15782.82 00:23:29.735 ======================================================== 00:23:29.735 Total : 14896.00 58.19 4310.24 484.59 15782.82 00:23:29.735 00:23:29.735 17:08:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:29.735 17:08:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:29.735 17:08:08 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:29.735 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.281 Initializing NVMe Controllers 00:23:32.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.281 Controller IO queue size 128, less than required. 00:23:32.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.281 Controller IO queue size 128, less than required. 00:23:32.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:32.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:32.281 Initialization complete. Launching workers. 
00:23:32.281 ======================================================== 00:23:32.281 Latency(us) 00:23:32.281 Device Information : IOPS MiB/s Average min max 00:23:32.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1908.11 477.03 68013.54 32444.24 104544.09 00:23:32.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.48 148.62 222767.72 79726.78 334845.94 00:23:32.281 ======================================================== 00:23:32.281 Total : 2502.58 625.65 104774.60 32444.24 334845.94 00:23:32.281 00:23:32.281 17:08:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:32.281 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.541 No valid NVMe controllers or AIO or URING devices found 00:23:32.541 Initializing NVMe Controllers 00:23:32.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.541 Controller IO queue size 128, less than required. 00:23:32.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.541 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:32.541 Controller IO queue size 128, less than required. 00:23:32.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:32.541 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:32.541 WARNING: Some requested NVMe devices were skipped 00:23:32.541 17:08:11 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:32.541 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.084 Initializing NVMe Controllers 00:23:35.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.084 Controller IO queue size 128, less than required. 00:23:35.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.084 Controller IO queue size 128, less than required. 00:23:35.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:35.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:35.084 Initialization complete. Launching workers. 
00:23:35.084 00:23:35.084 ==================== 00:23:35.084 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:35.084 TCP transport: 00:23:35.084 polls: 27499 00:23:35.084 idle_polls: 12602 00:23:35.084 sock_completions: 14897 00:23:35.084 nvme_completions: 5679 00:23:35.084 submitted_requests: 8522 00:23:35.084 queued_requests: 1 00:23:35.084 00:23:35.084 ==================== 00:23:35.084 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:35.084 TCP transport: 00:23:35.084 polls: 27234 00:23:35.084 idle_polls: 12991 00:23:35.084 sock_completions: 14243 00:23:35.084 nvme_completions: 5675 00:23:35.084 submitted_requests: 8492 00:23:35.084 queued_requests: 1 00:23:35.084 ======================================================== 00:23:35.084 Latency(us) 00:23:35.084 Device Information : IOPS MiB/s Average min max 00:23:35.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1419.45 354.86 92731.29 43827.42 152974.04 00:23:35.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1418.45 354.61 91328.23 38001.93 138418.62 00:23:35.084 ======================================================== 00:23:35.084 Total : 2837.90 709.48 92030.01 38001.93 152974.04 00:23:35.084 00:23:35.084 17:08:13 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:35.084 17:08:13 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.345 rmmod nvme_tcp 00:23:35.345 rmmod nvme_fabrics 00:23:35.345 rmmod nvme_keyring 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1557919 ']' 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1557919 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1557919 ']' 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1557919 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1557919 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:35.345 17:08:14 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1557919' 00:23:35.345 killing process with pid 1557919 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1557919 00:23:35.345 [2024-05-15 17:08:14.151224] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:35.345 17:08:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1557919 00:23:37.886 17:08:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.886 17:08:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.886 17:08:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.886 17:08:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.886 17:08:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.887 17:08:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.887 17:08:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.887 17:08:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.798 17:08:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.798 00:23:39.798 real 0m23.747s 00:23:39.798 user 0m58.424s 00:23:39.798 sys 0m7.831s 00:23:39.798 17:08:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:39.798 17:08:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:39.798 ************************************ 00:23:39.798 END TEST nvmf_perf 00:23:39.798 ************************************ 00:23:39.798 17:08:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:39.798 17:08:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:39.798 17:08:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:39.798 17:08:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.798 ************************************ 00:23:39.798 START TEST nvmf_fio_host 00:23:39.798 ************************************ 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:39.798 * Looking for test storage... 
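Stripped of the xtrace noise, the nvmf_perf run that just finished configured the target and drove it roughly as follows (condensed from the commands traced above, with the Jenkins workspace prefix shortened to repo-relative paths; the Nvme0 PCIe address 0000:65:00.0 is specific to this machine):

# target side: transport, backing bdevs, subsystem with two namespaces, TCP listener on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_malloc_create 64 512                    # creates Malloc0 (64 MiB, 512-byte blocks)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: the last of the perf invocations above, with per-connection transport statistics
build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat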
00:23:39.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.798 17:08:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.799 17:08:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
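The NIC discovery that follows (the same scan already ran once for the perf suite above) builds the e810/x722/mlx arrays from PCI device IDs and then resolves each matching PCI function to its net device through sysfs. The core lookup behind lines like "Found net devices under 0000:4b:00.0: cvl_0_0" is essentially this (a sketch, not the gather_supported_nvmf_pci_devs helper itself):

pci=0000:4b:00.0                          # one of the two e810 ports reported in this log
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] && echo "${dev##*/}"    # prints the interface name, e.g. cvl_0_0
done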
00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:46.388 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:46.388 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.388 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:46.388 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:46.389 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.389 17:08:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.389 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:23:46.651 00:23:46.651 --- 10.0.0.2 ping statistics --- 00:23:46.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.651 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:46.651 00:23:46.651 --- 10.0.0.1 ping statistics --- 00:23:46.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.651 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=1564825 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 1564825 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1564825 ']' 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:46.651 17:08:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.651 [2024-05-15 17:08:25.396897] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:23:46.651 [2024-05-15 17:08:25.396959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.651 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.651 [2024-05-15 17:08:25.467876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.913 [2024-05-15 17:08:25.543210] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
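The nvmf_tcp_init trace above wires the two e810 ports together before the target starts: cvl_0_0 is moved into a private network namespace for the target, cvl_0_1 stays in the host as the initiator side, and a firewall rule opens the NVMe/TCP port. Consolidated, the steps come down to roughly the following (a sketch assembled from the commands visible in the trace; the cvl_0_* names, 10.0.0.x addresses and namespace name are specific to this run, and everything runs as root):

  # flush any stale addresses, then split the two ports across namespaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on port 4420 and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target itself is then launched inside the namespace (host/fio.sh line 21 in the trace)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF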
00:23:46.913 [2024-05-15 17:08:25.543246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.913 [2024-05-15 17:08:25.543254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.913 [2024-05-15 17:08:25.543260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.913 [2024-05-15 17:08:25.543265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.913 [2024-05-15 17:08:25.543409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.913 [2024-05-15 17:08:25.543533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.913 [2024-05-15 17:08:25.543688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.913 [2024-05-15 17:08:25.543841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.485 [2024-05-15 17:08:26.186007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.485 Malloc1 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:23:47.485 [2024-05-15 17:08:26.285325] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:47.485 [2024-05-15 17:08:26.285557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.485 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:23:47.486 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:47.773 
17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:47.773 17:08:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.034 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:48.034 fio-3.35 00:23:48.034 Starting 1 thread 00:23:48.034 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.577 00:23:50.577 test: (groupid=0, jobs=1): err= 0: pid=1565322: Wed May 15 17:08:28 2024 00:23:50.577 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(79.3MiB/2004msec) 00:23:50.577 slat (usec): min=2, max=273, avg= 2.27, stdev= 2.79 00:23:50.577 clat (usec): min=4127, max=9203, avg=6980.14, stdev=853.09 00:23:50.577 lat (usec): min=4129, max=9210, avg=6982.41, stdev=853.07 00:23:50.577 clat percentiles (usec): 00:23:50.577 | 1.00th=[ 4621], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 6587], 00:23:50.577 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:23:50.577 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:23:50.577 | 99.00th=[ 8455], 99.50th=[ 8455], 99.90th=[ 8848], 99.95th=[ 8979], 00:23:50.577 | 99.99th=[ 8979] 00:23:50.577 bw ( KiB/s): min=38520, max=44472, per=99.89%, avg=40484.00, stdev=2696.92, samples=4 00:23:50.577 iops : min= 9630, max=11118, avg=10121.00, stdev=674.23, samples=4 00:23:50.577 write: IOPS=10.1k, BW=39.6MiB/s (41.6MB/s)(79.4MiB/2004msec); 0 zone resets 00:23:50.577 slat (usec): min=2, max=264, avg= 2.36, stdev= 2.06 00:23:50.577 clat (usec): min=2905, max=8120, avg=5614.07, stdev=684.88 00:23:50.577 lat (usec): min=2936, max=8123, avg=5616.43, stdev=684.90 00:23:50.577 clat percentiles (usec): 00:23:50.577 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4359], 20.00th=[ 5276], 00:23:50.577 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:23:50.577 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6456], 00:23:50.577 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7373], 00:23:50.577 | 99.99th=[ 7898] 00:23:50.577 bw ( KiB/s): min=38896, max=45184, per=99.88%, avg=40546.00, stdev=3093.17, samples=4 00:23:50.577 iops : min= 9724, max=11296, avg=10136.50, stdev=773.29, samples=4 00:23:50.577 lat (msec) : 4=2.37%, 10=97.63% 00:23:50.577 cpu : usr=71.24%, sys=27.01%, ctx=53, majf=0, minf=4 00:23:50.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:50.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:50.577 issued rwts: total=20305,20337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:50.577 00:23:50.577 Run status group 0 (all jobs): 00:23:50.577 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=79.3MiB (83.2MB), run=2004-2004msec 00:23:50.577 WRITE: bw=39.6MiB/s (41.6MB/s), 39.6MiB/s-39.6MiB/s (41.6MB/s-41.6MB/s), io=79.4MiB (83.3MB), run=2004-2004msec 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:23:50.577 17:08:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:50.577 17:08:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:50.577 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:50.577 fio-3.35 00:23:50.577 Starting 1 thread 00:23:50.578 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.223 00:23:53.223 test: (groupid=0, jobs=1): err= 0: pid=1565840: Wed May 15 17:08:31 2024 00:23:53.223 read: IOPS=9220, BW=144MiB/s (151MB/s)(289MiB/2006msec) 00:23:53.223 slat (usec): min=3, max=111, avg= 3.72, stdev= 1.59 00:23:53.223 clat (usec): min=1487, max=16579, avg=8330.16, stdev=1956.07 00:23:53.223 lat (usec): min=1490, max=16583, avg=8333.88, stdev=1956.16 
00:23:53.223 clat percentiles (usec): 00:23:53.223 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6652], 00:23:53.223 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 8717], 00:23:53.223 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11600], 00:23:53.223 | 99.00th=[13435], 99.50th=[13960], 99.90th=[15533], 99.95th=[16057], 00:23:53.223 | 99.99th=[16581] 00:23:53.223 bw ( KiB/s): min=67200, max=82304, per=49.96%, avg=73704.00, stdev=6448.95, samples=4 00:23:53.223 iops : min= 4200, max= 5144, avg=4606.50, stdev=403.06, samples=4 00:23:53.223 write: IOPS=5375, BW=84.0MiB/s (88.1MB/s)(151MiB/1793msec); 0 zone resets 00:23:53.223 slat (usec): min=40, max=526, avg=41.23, stdev= 8.56 00:23:53.223 clat (usec): min=2354, max=17976, avg=9538.40, stdev=1501.19 00:23:53.223 lat (usec): min=2398, max=18016, avg=9579.63, stdev=1502.21 00:23:53.223 clat percentiles (usec): 00:23:53.223 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[ 8356], 00:23:53.223 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:23:53.223 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[12125], 00:23:53.223 | 99.00th=[13566], 99.50th=[14091], 99.90th=[16712], 99.95th=[17433], 00:23:53.223 | 99.99th=[17957] 00:23:53.223 bw ( KiB/s): min=70112, max=85376, per=89.33%, avg=76832.00, stdev=6495.97, samples=4 00:23:53.223 iops : min= 4382, max= 5336, avg=4802.00, stdev=406.00, samples=4 00:23:53.223 lat (msec) : 2=0.04%, 4=0.42%, 10=74.28%, 20=25.26% 00:23:53.223 cpu : usr=83.34%, sys=14.51%, ctx=14, majf=0, minf=4 00:23:53.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:53.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:53.223 issued rwts: total=18496,9638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:53.223 00:23:53.223 Run status group 0 (all jobs): 00:23:53.223 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (303MB), run=2006-2006msec 00:23:53.223 WRITE: bw=84.0MiB/s (88.1MB/s), 84.0MiB/s-84.0MiB/s (88.1MB/s-88.1MB/s), io=151MiB (158MB), run=1793-1793msec 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:23:53.223 rmmod nvme_tcp 00:23:53.223 rmmod nvme_fabrics 00:23:53.223 rmmod nvme_keyring 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1564825 ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1564825 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1564825 ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1564825 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1564825 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1564825' 00:23:53.223 killing process with pid 1564825 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1564825 00:23:53.223 [2024-05-15 17:08:31.832912] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1564825 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.223 17:08:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.774 17:08:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.774 00:23:55.774 real 0m15.802s 00:23:55.774 user 1m4.570s 00:23:55.774 sys 0m6.925s 00:23:55.774 17:08:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:55.774 17:08:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.774 ************************************ 00:23:55.774 END TEST nvmf_fio_host 00:23:55.774 ************************************ 00:23:55.774 17:08:34 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:55.774 17:08:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:55.774 17:08:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:55.774 
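The two fio passes in the nvmf_fio_host test that just ended are driven through SPDK's userspace NVMe fio plugin rather than the kernel initiator: the plugin is LD_PRELOADed into fio and the target is selected through a key/value --filename string instead of a block device path. A minimal sketch of that invocation, reusing the plugin path, job file and connection string from the trace above (the second pass only swaps in mock_sgl_config.fio and drops --bs):

  PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  JOB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio

  # ioengine=spdk comes from the job file; the --filename string carries the transport
  # type, address family, target address, service id and namespace id
  LD_PRELOAD=$PLUGIN /usr/src/fio/fio "$JOB" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096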
17:08:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.774 ************************************ 00:23:55.774 START TEST nvmf_failover 00:23:55.774 ************************************ 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:55.774 * Looking for test storage... 00:23:55.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.774 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.775 17:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:02.360 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:02.360 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.360 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:02.361 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:02.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.361 17:08:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:02.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:24:02.361 00:24:02.361 --- 10.0.0.2 ping statistics --- 00:24:02.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.361 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:24:02.361 00:24:02.361 --- 10.0.0.1 ping statistics --- 00:24:02.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.361 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:02.361 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1570452 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1570452 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1570452 ']' 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:02.622 17:08:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:02.622 [2024-05-15 17:08:41.254504] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:24:02.622 [2024-05-15 17:08:41.254563] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.622 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.622 [2024-05-15 17:08:41.337989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:02.622 [2024-05-15 17:08:41.393128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.622 [2024-05-15 17:08:41.393157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.622 [2024-05-15 17:08:41.393164] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.622 [2024-05-15 17:08:41.393169] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.622 [2024-05-15 17:08:41.393174] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.622 [2024-05-15 17:08:41.393299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.622 [2024-05-15 17:08:41.393455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.622 [2024-05-15 17:08:41.393457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.194 17:08:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.194 17:08:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:03.194 17:08:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.194 17:08:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.194 17:08:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.455 17:08:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.455 17:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:03.455 [2024-05-15 17:08:42.188609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.455 17:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:03.716 Malloc0 00:24:03.716 17:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.716 17:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.977 17:08:42 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.239 [2024-05-15 17:08:42.838225] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:04.239 [2024-05-15 17:08:42.838443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.239 17:08:42 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:04.239 [2024-05-15 17:08:42.990800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:04.239 17:08:43 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:04.500 [2024-05-15 17:08:43.159315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1570813 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1570813 /var/tmp/bdevperf.sock 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1570813 ']' 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
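Before bdevperf comes up, the failover test has already built the target side over RPC: one TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and three listeners on 10.0.0.2 so the initiator has ports 4420/4421/4422 to fail over between. Condensed into one place, the sequence traced above looks roughly like this (same paths and addresses as the run; the trace issues the three add_listener calls individually, the loop is only a consolidation, and the target's default RPC socket is assumed):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # transport, backing bdev, subsystem and namespace
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # three listen addresses on the same IP, one per failover path
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # bdevperf starts idle (-z) on its own RPC socket and is driven from there
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

The trace below then attaches the NVMe0 controller to the first two paths (bdev_nvme_attach_controller with -s 4420 and -s 4421) over that bdevperf socket before the I/O and failover steps begin.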
00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:04.500 17:08:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 17:08:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:05.443 17:08:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:05.443 17:08:44 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:05.704 NVMe0n1 00:24:05.704 17:08:44 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:05.965 00:24:05.965 17:08:44 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1571150 00:24:05.965 17:08:44 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:05.965 17:08:44 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:07.350 17:08:45 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.350 [2024-05-15 17:08:45.904415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350 [2024-05-15 17:08:45.904504] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2748ef0 is same with the state(5) to be set 00:24:07.350
[... the same tcp.c:1598 *ERROR* message for tqpair=0x2748ef0 repeats verbatim through timestamp 17:08:45.904976; duplicate entries elided ...]
00:24:07.351 17:08:45 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:10.647 17:08:48 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:10.647 00:24:10.648 17:08:49 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:10.648 [2024-05-15 17:08:49.342358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2749720 is same with the state(5) to be set 00:24:10.648
[... the same tcp.c:1598 *ERROR* message for tqpair=0x2749720 repeats verbatim through timestamp 17:08:49.342524; duplicate entries elided ...]
00:24:10.648 17:08:49 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:13.942 17:08:52 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.942 [2024-05-15 17:08:52.518961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.942 17:08:52 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:14.889 17:08:53 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:14.889 [2024-05-15 17:08:53.697757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ef260 is same with the state(5) to be set 00:24:14.889
[... the same tcp.c:1598 *ERROR* message for tqpair=0x24ef260 repeats verbatim through timestamp 17:08:53.698197; duplicate entries elided ...]
00:24:14.890 [2024-05-15
17:08:53.698202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ef260 is same with the state(5) to be set 00:24:15.150 17:08:53 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1571150 00:24:21.735 0 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1570813 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1570813 ']' 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1570813 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1570813 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1570813' 00:24:21.735 killing process with pid 1570813 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1570813 00:24:21.735 17:08:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1570813 00:24:21.735 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:21.735 [2024-05-15 17:08:43.235897] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:24:21.735 [2024-05-15 17:08:43.235956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570813 ] 00:24:21.735 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.735 [2024-05-15 17:08:43.294592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.735 [2024-05-15 17:08:43.358855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.735 Running I/O for 15 seconds... 
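Before the bdevperf completion dump from try.txt that follows, the failover sequence exercised by host/failover.sh@35 through @59 above can be condensed into the sketch below. It only restates commands already recorded in this log; $rpc is shorthand assumed here for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and $bdevperf_py for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py.
# Condensed reader's sketch of the failover steps recorded above (not the actual test script)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$bdevperf_py -s /var/tmp/bdevperf.sock perform_tests &    # 15-second verify workload
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait    # bdevperf keeps I/O running across the listener failovers; its result above is 0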
00:24:21.735 [2024-05-15 17:08:45.905837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.905990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.905996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.735 [2024-05-15 17:08:45.906005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.735 [2024-05-15 17:08:45.906013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96144 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.736 [2024-05-15 17:08:45.906680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.736 [2024-05-15 17:08:45.906687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:21.737 [2024-05-15 17:08:45.906703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 
17:08:45.906863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.906987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.906994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907187] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.737 [2024-05-15 17:08:45.907342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.737 [2024-05-15 17:08:45.907349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.738 [2024-05-15 17:08:45.907365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.738 [2024-05-15 17:08:45.907522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.738 [2024-05-15 17:08:45.907929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.907950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.738 [2024-05-15 17:08:45.907958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.738 [2024-05-15 17:08:45.907965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0 00:24:21.738 [2024-05-15 17:08:45.907974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.908010] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf0e240 was disconnected and freed. reset controller. 
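The "(00/08)" printed with every aborted completion above is SPDK's status-code-type/status-code pair: 00h is the generic command status set and 08h is "Command Aborted due to SQ Deletion", which is why each entry reads ABORTED - SQ DELETION. A minimal bash helper, included purely as an illustration (it is not part of the test scripts), decodes the pair:

decode_nvme_status() {
  # Map the "(SCT/SC)" pair that spdk_nvme_print_completion emits to a
  # human-readable name; only the codes appearing in this log are handled.
  local sct=$1 sc=$2
  case "${sct}/${sc}" in
    00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
    00/08) echo "GENERIC - COMMAND ABORTED DUE TO SQ DELETION" ;;
    *)     echo "unrecognized status (sct=${sct} sc=${sc})" ;;
  esac
}

decode_nvme_status 00 08   # prints: GENERIC - COMMAND ABORTED DUE TO SQ DELETION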
00:24:21.738 [2024-05-15 17:08:45.908026] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:21.738 [2024-05-15 17:08:45.908046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.738 [2024-05-15 17:08:45.908054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.908062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.738 [2024-05-15 17:08:45.908069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.738 [2024-05-15 17:08:45.908077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.738 [2024-05-15 17:08:45.908084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:45.908092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.739 [2024-05-15 17:08:45.908099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:45.908106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.739 [2024-05-15 17:08:45.911682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.739 [2024-05-15 17:08:45.911704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeef2d0 (9): Bad file descriptor 00:24:21.739 [2024-05-15 17:08:46.118991] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
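The failover record above shows the initiator abandoning the listener at 10.0.0.2:4420 and reconnecting through 10.0.0.2:4421 before the reset completes. As a rough sketch only (not the commands this test actually runs, and rpc.py/nvme-cli flags can differ between versions), a secondary TCP listener for the same subsystem could be added on the target and probed from a host like this, reusing the NQN and addresses seen in the log:

# Assumption: an SPDK nvmf target is already serving nqn.2016-06.io.spdk:cnode1
# on 10.0.0.2:4420 and scripts/rpc.py is available on the target side.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421
# From a host with nvme-cli, verify the secondary portal accepts connections,
# then clean up.
nvme connect -t tcp -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
nvme list-subsys
nvme disconnect -n nqn.2016-06.io.spdk:cnode1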
00:24:21.739 [2024-05-15 17:08:49.344802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.344984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.344993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345009] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.739 [2024-05-15 17:08:49.345270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.739 [2024-05-15 17:08:49.345363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.739 [2024-05-15 17:08:49.345369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50544 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 
17:08:49.345683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.740 [2024-05-15 17:08:49.345975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.740 [2024-05-15 17:08:49.345985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.741 [2024-05-15 17:08:49.345992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.741 [2024-05-15 17:08:49.346008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.741 [2024-05-15 17:08:49.346023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.741 [2024-05-15 17:08:49.346040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50808 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50816 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50824 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50832 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50840 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:21.741 [2024-05-15 17:08:49.346193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50848 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50856 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50864 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50872 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50880 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50888 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346348] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50896 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50904 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50912 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50920 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50928 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50936 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50944 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50952 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50960 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50968 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.741 [2024-05-15 17:08:49.346623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50976 len:8 PRP1 0x0 PRP2 0x0 00:24:21.741 [2024-05-15 17:08:49.346630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.741 [2024-05-15 17:08:49.346637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.741 [2024-05-15 17:08:49.346642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50984 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 
17:08:49.346668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50992 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51000 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51008 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51016 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51024 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51032 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346825] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51040 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51048 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51056 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51064 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51072 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51080 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.346981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.346986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.346992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51088 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.346999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.347018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51096 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.347025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.347043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51104 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.347051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.347070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51112 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.347077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.347095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51120 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.347102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.347121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50304 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.347128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 
17:08:49.347147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50312 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.347155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.347163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.347168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.357730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50320 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.357758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.357772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.357777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.357784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50328 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.357791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.357798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.357804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.357811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50336 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.357818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.357825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.357830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.357837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50344 len:8 PRP1 0x0 PRP2 0x0 00:24:21.742 [2024-05-15 17:08:49.357844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.742 [2024-05-15 17:08:49.357851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.742 [2024-05-15 17:08:49.357856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.742 [2024-05-15 17:08:49.357862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50352 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.357869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.357876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.357881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.357888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50360 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.357894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.357902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.357907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.357913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50368 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.357919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.357927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.357932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.357942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50376 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.357949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.357957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.357962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.357968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50384 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.357975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.357982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.357987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.357993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50392 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.358000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.358012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.358018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50400 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.358025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.358037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.358043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:50408 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.358050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.743 [2024-05-15 17:08:49.358062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.743 [2024-05-15 17:08:49.358068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50416 len:8 PRP1 0x0 PRP2 0x0 00:24:21.743 [2024-05-15 17:08:49.358076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358112] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf10210 was disconnected and freed. reset controller. 00:24:21.743 [2024-05-15 17:08:49.358121] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:21.743 [2024-05-15 17:08:49.358147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.743 [2024-05-15 17:08:49.358156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.743 [2024-05-15 17:08:49.358173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.743 [2024-05-15 17:08:49.358191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.743 [2024-05-15 17:08:49.358205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:49.358213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.743 [2024-05-15 17:08:49.358250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeef2d0 (9): Bad file descriptor 00:24:21.743 [2024-05-15 17:08:49.361802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.743 [2024-05-15 17:08:49.535169] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:21.743 [2024-05-15 17:08:53.698757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698965] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.698988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.698997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.743 [2024-05-15 17:08:53.699116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.743 [2024-05-15 17:08:53.699125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90000 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:21.744 [2024-05-15 17:08:53.699621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.744 [2024-05-15 17:08:53.699670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.744 [2024-05-15 17:08:53.699679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 
17:08:53.699783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.699986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.699993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.745 [2024-05-15 17:08:53.700328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.745 [2024-05-15 17:08:53.700338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 
[2024-05-15 17:08:53.700436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:97 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.746 [2024-05-15 17:08:53.700846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.746 [2024-05-15 17:08:53.700876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.746 [2024-05-15 17:08:53.700882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90696 len:8 PRP1 0x0 PRP2 0x0 00:24:21.746 [2024-05-15 17:08:53.700890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700929] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf12290 was disconnected and freed. reset controller. 
00:24:21.746 [2024-05-15 17:08:53.700938] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:21.746 [2024-05-15 17:08:53.700958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.746 [2024-05-15 17:08:53.700966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.746 [2024-05-15 17:08:53.700981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.700988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.746 [2024-05-15 17:08:53.700996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.701003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.746 [2024-05-15 17:08:53.701011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.746 [2024-05-15 17:08:53.701018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.746 [2024-05-15 17:08:53.704595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.746 [2024-05-15 17:08:53.704620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeef2d0 (9): Bad file descriptor 00:24:21.746 [2024-05-15 17:08:53.872901] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
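The block above is the expected failover pattern for the 15-second run: when the active path at 10.0.0.2:4422 goes down, bdev_nvme aborts the queued I/O with ABORTED - SQ DELETION completions, starts a failover to the next configured path (10.0.0.2:4420), reconnects, and logs "Resetting controller successful". To pull just those transitions out of a capture like this one, a plain grep over the bdev_nvme notices is enough; this is an illustrative helper, not part of the test scripts, and try.txt is simply the capture file this run cats and removes further down.

    # Illustrative only: summarize failover activity from a captured bdevperf log.
    grep -E 'Start failover from|Resetting controller successful' try.txt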
00:24:21.746 00:24:21.746 Latency(us) 00:24:21.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.746 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:21.746 Verification LBA range: start 0x0 length 0x4000 00:24:21.746 NVMe0n1 : 15.01 10909.73 42.62 1301.16 0.00 10454.19 552.96 20206.93 00:24:21.747 =================================================================================================================== 00:24:21.747 Total : 10909.73 42.62 1301.16 0.00 10454.19 552.96 20206.93 00:24:21.747 Received shutdown signal, test time was about 15.000000 seconds 00:24:21.747 00:24:21.747 Latency(us) 00:24:21.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.747 =================================================================================================================== 00:24:21.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1574129 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1574129 /var/tmp/bdevperf.sock 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1574129 ']' 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
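At this point host/failover.sh has checked that the 15-second run produced exactly three "Resetting controller successful" notices and is starting a second bdevperf instance that sits idle (-z) until it is driven over /var/tmp/bdevperf.sock by RPC. A minimal sketch of that step, using the same commands that appear in the trace (long workspace paths shortened, so treat this as a paraphrase rather than a verbatim copy of the script):

    # Sketch of the host/failover.sh verification step traced above.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count != 3 )) && exit 1      # the first phase expects exactly three successful resets
    # Second bdevperf: stays idle until bdevperf.py sends perform_tests over its RPC socket.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!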
00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:21.747 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.317 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:22.317 17:09:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:22.317 17:09:00 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.317 [2024-05-15 17:09:01.066949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.317 17:09:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.577 [2024-05-15 17:09:01.227292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:22.578 17:09:01 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.838 NVMe0n1 00:24:22.838 17:09:01 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.098 00:24:23.098 17:09:01 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.667 00:24:23.667 17:09:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.667 17:09:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:23.667 17:09:02 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.926 17:09:02 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:27.230 17:09:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.230 17:09:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:27.230 17:09:05 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.230 17:09:05 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1575222 00:24:27.230 17:09:05 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1575222 00:24:28.169 0 00:24:28.169 17:09:06 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:28.169 [2024-05-15 17:09:00.153013] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:24:28.169 [2024-05-15 17:09:00.153070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574129 ] 00:24:28.169 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.169 [2024-05-15 17:09:00.211780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.169 [2024-05-15 17:09:00.277215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.169 [2024-05-15 17:09:02.596145] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:28.169 [2024-05-15 17:09:02.596191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.169 [2024-05-15 17:09:02.596202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.169 [2024-05-15 17:09:02.596211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.169 [2024-05-15 17:09:02.596219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.169 [2024-05-15 17:09:02.596227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.169 [2024-05-15 17:09:02.596234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.169 [2024-05-15 17:09:02.596242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.169 [2024-05-15 17:09:02.596249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.169 [2024-05-15 17:09:02.596256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.169 [2024-05-15 17:09:02.596278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.169 [2024-05-15 17:09:02.596292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa822d0 (9): Bad file descriptor 00:24:28.169 [2024-05-15 17:09:02.657799] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:28.169 Running I/O for 1 seconds... 
00:24:28.169 00:24:28.169 Latency(us) 00:24:28.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.169 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:28.169 Verification LBA range: start 0x0 length 0x4000 00:24:28.169 NVMe0n1 : 1.00 11312.27 44.19 0.00 0.00 11261.66 1181.01 13598.72 00:24:28.169 =================================================================================================================== 00:24:28.169 Total : 11312.27 44.19 0.00 0.00 11261.66 1181.01 13598.72 00:24:28.169 17:09:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.169 17:09:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:28.429 17:09:07 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.429 17:09:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.429 17:09:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:28.712 17:09:07 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.016 17:09:07 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1574129 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1574129 ']' 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1574129 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1574129 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:32.316 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1574129' 00:24:32.316 killing process with pid 1574129 00:24:32.317 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1574129 00:24:32.317 17:09:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1574129 00:24:32.317 17:09:10 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:32.317 17:09:10 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:32.317 
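That closes out the RPC-driven phase of the failover test: two extra listeners (4421 and 4422) were added to cnode1, NVMe0 was attached to all three ports, the 4420 path was detached to force a failover (the try.txt excerpt above shows bdev_nvme moving from 10.0.0.2:4420 to 10.0.0.2:4421), a one-second verify pass was run via bdevperf.py perform_tests, and the remaining paths were detached before the bdevperf process was killed. Condensed into a sketch, with rpc_py and the relative paths standing in for the full workspace paths used in the trace:

    # Condensed sketch of the RPC sequence traced above; rpc_py and paths are placeholders.
    rpc_py=scripts/rpc.py
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do                      # three paths to the same subsystem
        $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Drop the active path; bdev_nvme fails over to 4421 and the NVMe0n1 bdev stays usable.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    # Short verify run against the surviving paths, then tear the remaining paths down.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    for port in 4422 4421; do
        $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done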
17:09:11 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.317 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.317 rmmod nvme_tcp 00:24:32.576 rmmod nvme_fabrics 00:24:32.576 rmmod nvme_keyring 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1570452 ']' 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1570452 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1570452 ']' 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1570452 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1570452 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1570452' 00:24:32.576 killing process with pid 1570452 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1570452 00:24:32.576 [2024-05-15 17:09:11.253846] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1570452 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.576 17:09:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.118 17:09:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.118 00:24:35.118 real 0m39.348s 00:24:35.118 user 
2m2.493s 00:24:35.118 sys 0m7.814s 00:24:35.118 17:09:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:35.118 17:09:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.118 ************************************ 00:24:35.118 END TEST nvmf_failover 00:24:35.118 ************************************ 00:24:35.118 17:09:13 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:35.118 17:09:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:35.118 17:09:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:35.118 17:09:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.118 ************************************ 00:24:35.118 START TEST nvmf_host_discovery 00:24:35.118 ************************************ 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:35.118 * Looking for test storage... 00:24:35.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.118 17:09:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.119 17:09:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:41.714 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:41.714 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.714 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:41.715 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:41.715 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.715 17:09:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.715 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:24:41.977 00:24:41.977 --- 10.0.0.2 ping statistics --- 00:24:41.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.977 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:24:41.977 00:24:41.977 --- 10.0.0.1 ping statistics --- 00:24:41.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.977 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1580687 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1580687 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1580687 ']' 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:41.977 17:09:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.977 [2024-05-15 17:09:20.715893] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:24:41.977 [2024-05-15 17:09:20.715945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.977 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.977 [2024-05-15 17:09:20.798885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.238 [2024-05-15 17:09:20.892294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:42.238 [2024-05-15 17:09:20.892350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.238 [2024-05-15 17:09:20.892358] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.238 [2024-05-15 17:09:20.892365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.238 [2024-05-15 17:09:20.892371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.238 [2024-05-15 17:09:20.892396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.811 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.812 [2024-05-15 17:09:21.544516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.812 [2024-05-15 17:09:21.556491] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:42.812 [2024-05-15 17:09:21.556816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.812 null0 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.812 null1 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1581014 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1581014 /tmp/host.sock 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1581014 ']' 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:42.812 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:42.812 17:09:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.073 [2024-05-15 17:09:21.651354] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:24:43.073 [2024-05-15 17:09:21.651417] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581014 ] 00:24:43.073 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.073 [2024-05-15 17:09:21.714918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.073 [2024-05-15 17:09:21.789116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.645 17:09:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.645 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.906 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.168 [2024-05-15 17:09:22.783863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:44.168 
17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:44.168 17:09:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.429 17:09:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:24:44.429 17:09:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:44.690 [2024-05-15 17:09:23.483781] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.690 [2024-05-15 17:09:23.483805] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.690 [2024-05-15 17:09:23.483819] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.951 [2024-05-15 17:09:23.572100] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:44.951 [2024-05-15 17:09:23.633283] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:44.951 [2024-05-15 17:09:23.633302] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.212 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.474 17:09:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.474 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.736 [2024-05-15 17:09:24.516441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:45.736 [2024-05-15 17:09:24.516861] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:45.736 [2024-05-15 17:09:24.516886] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.736 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:45.737 17:09:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:45.999 [2024-05-15 17:09:24.646663] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:45.999 17:09:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:24:45.999 [2024-05-15 17:09:24.748553] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:45.999 [2024-05-15 17:09:24.748575] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:45.999 [2024-05-15 17:09:24.748581] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:46.943 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.208 [2024-05-15 17:09:25.783833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.208 [2024-05-15 17:09:25.783858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.208 [2024-05-15 17:09:25.783868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.208 [2024-05-15 17:09:25.783875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.208 [2024-05-15 17:09:25.783883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.208 [2024-05-15 17:09:25.783891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.208 [2024-05-15 17:09:25.783898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.208 [2024-05-15 17:09:25.783905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.208 [2024-05-15 17:09:25.783913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.208 [2024-05-15 17:09:25.784395] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:47.208 [2024-05-15 17:09:25.784409] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 
00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:47.208 [2024-05-15 17:09:25.793843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.208 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:47.208 [2024-05-15 17:09:25.803884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.208 [2024-05-15 17:09:25.804242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.208 [2024-05-15 17:09:25.804567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.208 [2024-05-15 17:09:25.804579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.208 [2024-05-15 17:09:25.804588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.208 [2024-05-15 17:09:25.804600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.804618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.804625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.804633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.804645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
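For context on the connect() retries recorded above: host/discovery.sh has just removed the 4420 listener (the rpc_cmd nvmf_subsystem_remove_listener ... -s 4420 call traced a few records earlier), so the host's reconnect attempts to 10.0.0.2:4420 fail with errno 111 (ECONNREFUSED) until the discovery poller prunes that path and keeps only 4421. A minimal standalone sketch of the same step, assuming the target and host apps are already running as set up earlier in this log and that SPDK's scripts/rpc.py is used in place of the test's rpc_cmd wrapper, might look like:

# Hedged sketch, not the autotest script itself; the rpc.py invocation and socket
# paths are assumptions based on the RPCs and the /tmp/host.sock socket visible above.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # should report only 4421 once the 4420 path is dropped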
00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.209 [2024-05-15 17:09:25.813941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.209 [2024-05-15 17:09:25.814298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.814745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.814782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.209 [2024-05-15 17:09:25.814793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.209 [2024-05-15 17:09:25.814812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.814849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.814859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.814867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.814881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.209 [2024-05-15 17:09:25.823993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.209 [2024-05-15 17:09:25.824351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.824762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.824799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.209 [2024-05-15 17:09:25.824810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.209 [2024-05-15 17:09:25.824829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.824842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.824849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.824857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.824872] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.209 [2024-05-15 17:09:25.834051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.209 [2024-05-15 17:09:25.834371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:47.209 [2024-05-15 17:09:25.834830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.834867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.209 [2024-05-15 17:09:25.834881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.209 [2024-05-15 17:09:25.834901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.834930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.834940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.834950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.834976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:47.209 [2024-05-15 17:09:25.844110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.209 [2024-05-15 17:09:25.844431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.844764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.844775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.209 [2024-05-15 17:09:25.844782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.209 [2024-05-15 17:09:25.844794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.844811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.844817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.844825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.844836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.209 [2024-05-15 17:09:25.854165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.209 [2024-05-15 17:09:25.854488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.854804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.854814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.209 [2024-05-15 17:09:25.854821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.209 [2024-05-15 17:09:25.854833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.854850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.854857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.854864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.854874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
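The get_subsystem_names, get_bdev_list, and get_subsystem_paths checks interleaved with these reset records all follow the same helper pattern: query the host app on its private RPC socket, extract the name field with jq, then normalize with sort and xargs so the result can be string-compared against the expected value. A rough standalone equivalent, again assuming scripts/rpc.py as the client behind rpc_cmd:

# Hedged sketch of the helper pipelines traced above; expected values are taken from
# the [[ ... == ... ]] comparisons in this log, not re-verified here.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # -> nvme0
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # -> nvme0n1 nvme0n2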
00:24:47.209 [2024-05-15 17:09:25.864223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.209 [2024-05-15 17:09:25.864543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.864909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.209 [2024-05-15 17:09:25.864919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa32bb0 with addr=10.0.0.2, port=4420 00:24:47.209 [2024-05-15 17:09:25.864926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa32bb0 is same with the state(5) to be set 00:24:47.209 [2024-05-15 17:09:25.864937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa32bb0 (9): Bad file descriptor 00:24:47.209 [2024-05-15 17:09:25.864953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.209 [2024-05-15 17:09:25.864960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:47.209 [2024-05-15 17:09:25.864966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.209 [2024-05-15 17:09:25.864977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.209 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.210 [2024-05-15 17:09:25.872984] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:47.210 [2024-05-15 17:09:25.873001] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.210 17:09:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:47.210 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.472 17:09:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.416 [2024-05-15 17:09:27.213747] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:48.416 [2024-05-15 17:09:27.213763] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:48.416 [2024-05-15 17:09:27.213775] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.677 [2024-05-15 17:09:27.302044] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:48.677 [2024-05-15 17:09:27.407834] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:48.677 [2024-05-15 17:09:27.407864] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.677 request: 00:24:48.677 { 00:24:48.677 "name": "nvme", 00:24:48.677 "trtype": "tcp", 00:24:48.677 "traddr": "10.0.0.2", 00:24:48.677 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:48.677 "adrfam": "ipv4", 00:24:48.677 "trsvcid": "8009", 00:24:48.677 "wait_for_attach": true, 00:24:48.677 "method": "bdev_nvme_start_discovery", 00:24:48.677 "req_id": 1 00:24:48.677 } 00:24:48.677 Got JSON-RPC error response 00:24:48.677 response: 00:24:48.677 { 00:24:48.677 "code": -17, 00:24:48.677 "message": "File exists" 00:24:48.677 } 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.677 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.938 request: 00:24:48.938 { 00:24:48.938 "name": "nvme_second", 00:24:48.938 "trtype": "tcp", 00:24:48.938 "traddr": "10.0.0.2", 00:24:48.938 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:48.938 "adrfam": "ipv4", 00:24:48.938 "trsvcid": "8009", 00:24:48.938 "wait_for_attach": true, 00:24:48.938 "method": "bdev_nvme_start_discovery", 00:24:48.938 "req_id": 1 00:24:48.938 } 00:24:48.938 Got JSON-RPC error response 00:24:48.938 response: 00:24:48.938 { 00:24:48.938 "code": -17, 00:24:48.938 "message": "File exists" 00:24:48.938 } 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:48.938 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:48.939 
17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.939 17:09:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.883 [2024-05-15 17:09:28.680772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-15 17:09:28.681071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.883 [2024-05-15 17:09:28.681082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa30830 with addr=10.0.0.2, port=8010 00:24:49.883 [2024-05-15 17:09:28.681094] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:49.883 [2024-05-15 17:09:28.681101] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:49.883 [2024-05-15 17:09:28.681108] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:51.270 [2024-05-15 17:09:29.683104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.270 [2024-05-15 17:09:29.683432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.270 [2024-05-15 17:09:29.683442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa30830 with addr=10.0.0.2, port=8010 00:24:51.270 [2024-05-15 17:09:29.683454] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:51.270 [2024-05-15 17:09:29.683461] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:51.270 [2024-05-15 17:09:29.683468] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:52.214 [2024-05-15 17:09:30.685078] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:52.214 request: 00:24:52.214 { 00:24:52.214 "name": "nvme_second", 00:24:52.214 "trtype": "tcp", 00:24:52.214 "traddr": "10.0.0.2", 00:24:52.214 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:52.214 
"adrfam": "ipv4", 00:24:52.214 "trsvcid": "8010", 00:24:52.214 "attach_timeout_ms": 3000, 00:24:52.214 "method": "bdev_nvme_start_discovery", 00:24:52.214 "req_id": 1 00:24:52.214 } 00:24:52.214 Got JSON-RPC error response 00:24:52.214 response: 00:24:52.214 { 00:24:52.214 "code": -110, 00:24:52.214 "message": "Connection timed out" 00:24:52.214 } 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1581014 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.214 rmmod nvme_tcp 00:24:52.214 rmmod nvme_fabrics 00:24:52.214 rmmod nvme_keyring 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:52.214 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1580687 ']' 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1580687 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1580687 ']' 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1580687 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1580687 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1580687' 00:24:52.215 killing process with pid 1580687 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1580687 00:24:52.215 [2024-05-15 17:09:30.871196] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1580687 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.215 17:09:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.763 00:24:54.763 real 0m19.556s 00:24:54.763 user 0m23.112s 00:24:54.763 sys 0m6.648s 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.763 ************************************ 00:24:54.763 END TEST nvmf_host_discovery 00:24:54.763 ************************************ 00:24:54.763 17:09:33 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:54.763 17:09:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:54.763 17:09:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:54.763 17:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.763 ************************************ 00:24:54.763 START TEST nvmf_host_multipath_status 00:24:54.763 ************************************ 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:54.763 * Looking for test storage... 
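Before the multipath_status run gets going, the discovery sequence the just-finished test drove over /tmp/host.sock is worth condensing. The trace showed bdev_nvme_start_discovery with -w (wait for attach) succeeding once, every further start against the same discovery service being rejected with JSON-RPC error -17 "File exists", and a start against the unlistened port 8010 with a 3000 ms attach timeout failing with -110 "Connection timed out". A condensed sketch of those calls, assuming rpc_cmd is a thin wrapper around scripts/rpc.py with the -s socket argument shown in the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w          # attaches nvme0
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w          # duplicate: -17 "File exists"
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
      -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000     # nothing on 8010: -110 timeout
  $rpc -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
  $rpc -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme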
00:24:54.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.763 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.764 17:09:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.764 17:09:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.356 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:01.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:01.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
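The nvmf/common.sh trace above selects candidate NICs by PCI vendor/device ID (here Intel E810, 0x8086:0x159b, bound to the ice driver) and then resolves each PCI address to its kernel netdev through sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_y" lines come from. One way to reproduce that lookup by hand is sketched below; the lspci filter is my own shorthand, not what common.sh does (it builds a pci_bus_cache internally):

  # list E810 ports and the netdev name behind each PCI function
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: $(basename "$netdev")"
      done
  done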
00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:01.357 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:01.357 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.357 17:09:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.357 17:09:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:25:01.357 00:25:01.357 --- 10.0.0.2 ping statistics --- 00:25:01.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.357 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:25:01.357 00:25:01.357 --- 10.0.0.1 ping statistics --- 00:25:01.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.357 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1586829 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1586829 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1586829 ']' 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:01.357 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.618 [2024-05-15 17:09:40.206096] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:25:01.619 [2024-05-15 17:09:40.206162] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.619 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.619 [2024-05-15 17:09:40.278452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:01.619 [2024-05-15 17:09:40.354968] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.619 [2024-05-15 17:09:40.355005] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.619 [2024-05-15 17:09:40.355013] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.619 [2024-05-15 17:09:40.355019] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.619 [2024-05-15 17:09:40.355024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.619 [2024-05-15 17:09:40.355168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.619 [2024-05-15 17:09:40.355170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.191 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:02.191 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:02.191 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.191 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.191 17:09:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:02.191 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.191 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1586829 00:25:02.191 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:02.452 [2024-05-15 17:09:41.151744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.452 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:02.712 Malloc0 00:25:02.712 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:02.712 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.973 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.973 [2024-05-15 17:09:41.779019] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:25:02.973 [2024-05-15 17:09:41.779251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.973 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.233 [2024-05-15 17:09:41.931584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.233 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:03.233 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1587195 00:25:03.233 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:03.233 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1587195 /var/tmp/bdevperf.sock 00:25:03.233 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1587195 ']' 00:25:03.233 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.234 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:03.234 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
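(Note on the trace above: before the multipath checks start, the target side has been configured over /var/tmp/spdk.sock: a TCP transport, a Malloc0 bdev (64 MB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, listeners on 10.0.0.2 ports 4420 and 4421, and a bdevperf instance started in wait mode on /var/tmp/bdevperf.sock so that controllers can be attached and perform_tests driven later over that socket. A condensed sketch of that setup sequence, assuming the commands are run from the SPDK tree (paths shortened from the absolute paths in the trace), is:

# Target-side setup, condensed from the logged rpc.py calls.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# -r turns on ANA reporting, which the per-port state changes below rely on.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Host side: bdevperf starts idle (-z) and is controlled via its own RPC socket.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
End of note.)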
00:25:03.234 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:03.234 17:09:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:04.187 17:09:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:04.187 17:09:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:04.187 17:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:04.187 17:09:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:04.447 Nvme0n1 00:25:04.448 17:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:05.091 Nvme0n1 00:25:05.091 17:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:05.091 17:09:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:07.046 17:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:07.046 17:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:07.046 17:09:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:07.307 17:09:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:08.250 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:08.250 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:08.250 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.251 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.511 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.511 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:08.511 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.511 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.773 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.034 17:09:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.295 17:09:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.295 17:09:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:09.295 17:09:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:09.556 17:09:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:09.556 17:09:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:10.941 17:09:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:10.941 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:10.941 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.941 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.941 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.941 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.942 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.942 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.942 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.942 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.942 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.942 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.202 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.203 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.203 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.203 17:09:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.463 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.724 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.724 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:11.724 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:11.984 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:11.984 17:09:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:12.922 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:12.922 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:12.922 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.922 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.181 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.181 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.181 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.181 17:09:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.441 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.701 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.701 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.701 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.701 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:13.963 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:14.225 17:09:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:14.485 17:09:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.515 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.774 17:09:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.774 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.033 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.033 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.033 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.033 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.293 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.293 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:16.293 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.293 17:09:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.293 17:09:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.293 17:09:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:16.293 17:09:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:16.552 17:09:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:16.811 17:09:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:17.752 17:09:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.752 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.013 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.013 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.013 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.013 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.274 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.274 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.274 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.274 17:09:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.274 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.274 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:18.274 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.274 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.535 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.535 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:18.535 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.535 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.796 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.796 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:18.796 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:18.796 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:19.057 17:09:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:20.000 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:20.000 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:20.000 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.000 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.261 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.261 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.261 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.261 17:09:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.261 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.261 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.522 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:20.522 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.522 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.522 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:20.522 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.522 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.783 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.045 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.045 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:21.306 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:21.306 17:09:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:21.306 17:10:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.566 17:10:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:22.509 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:22.509 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.509 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.509 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.771 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.032 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.033 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.033 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.033 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.293 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.293 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.293 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.293 17:10:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.293 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.293 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.293 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.293 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.555 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.555 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:23.555 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:23.816 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:23.816 17:10:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.203 17:10:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.465 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.465 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.465 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.465 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.727 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.988 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.988 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:25.988 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.988 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:26.247 17:10:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:27.188 17:10:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:27.188 17:10:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:27.188 17:10:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.188 17:10:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.449 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.449 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:27.449 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.449 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.711 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.973 17:10:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.973 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.233 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.233 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:28.233 17:10:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.494 17:10:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:28.494 17:10:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.880 17:10:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.880 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.140 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.140 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.140 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.140 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.399 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.399 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.400 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.400 17:10:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.400 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.400 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:30.400 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.400 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1587195 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1587195 ']' 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1587195 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1587195 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
1587195' 00:25:30.660 killing process with pid 1587195 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1587195 00:25:30.660 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1587195 00:25:30.660 Connection closed with partial response: 00:25:30.660 00:25:30.660 00:25:30.969 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1587195 00:25:30.969 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.969 [2024-05-15 17:09:41.990311] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:25:30.969 [2024-05-15 17:09:41.990369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587195 ] 00:25:30.969 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.969 [2024-05-15 17:09:42.040953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.969 [2024-05-15 17:09:42.092716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.969 Running I/O for 90 seconds... 00:25:30.969 [2024-05-15 17:09:55.211470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.969 [2024-05-15 17:09:55.211503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:30.969 [2024-05-15 17:09:55.211537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.969 [2024-05-15 17:09:55.211544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.969 [2024-05-15 17:09:55.211559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.969 [2024-05-15 17:09:55.211565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:30.969 [2024-05-15 17:09:55.211575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.969 [2024-05-15 17:09:55.211581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:30.969 [2024-05-15 17:09:55.211591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.969 [2024-05-15 17:09:55.211596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.970 [2024-05-15 17:09:55.211606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.970 [2024-05-15 17:09:55.211611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.970 [2024-05-15 17:09:55.211622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.970 [2024-05-15 17:09:55.211627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:30.970-00:25:30.975 [2024-05-15 17:09:55.211637 - 17:10:07.267969] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on qid:1 for WRITE (lba 47968-48816 and 95392-95584, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 47800-47904 and 94872-95360, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
00:25:30.975 Received shutdown signal, test time was about 25.601910 seconds
00:25:30.975
00:25:30.975 Latency(us)
00:25:30.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.975 Job: Nvme0n1 (Core Mask 0x4, workload:
verify, depth: 128, IO size: 4096) 00:25:30.975 Verification LBA range: start 0x0 length 0x4000 00:25:30.975 Nvme0n1 : 25.60 10881.56 42.51 0.00 0.00 11745.01 428.37 3019898.88 00:25:30.975 =================================================================================================================== 00:25:30.975 Total : 10881.56 42.51 0.00 0.00 11745.01 428.37 3019898.88 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.975 rmmod nvme_tcp 00:25:30.975 rmmod nvme_fabrics 00:25:30.975 rmmod nvme_keyring 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1586829 ']' 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1586829 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1586829 ']' 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1586829 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1586829 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1586829' 00:25:30.975 killing process with pid 1586829 00:25:30.975 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1586829 00:25:30.975 [2024-05-15 17:10:09.795780] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:30.975 17:10:09 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1586829 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.282 17:10:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.199 17:10:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.199 00:25:33.199 real 0m38.908s 00:25:33.199 user 1m41.011s 00:25:33.199 sys 0m10.415s 00:25:33.199 17:10:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:33.199 17:10:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.199 ************************************ 00:25:33.199 END TEST nvmf_host_multipath_status 00:25:33.199 ************************************ 00:25:33.461 17:10:12 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:33.461 17:10:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:33.461 17:10:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:33.461 17:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.461 ************************************ 00:25:33.461 START TEST nvmf_discovery_remove_ifc 00:25:33.461 ************************************ 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:33.461 * Looking for test storage... 
00:25:33.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.461 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.462 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.462 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.462 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.462 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.462 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.462 17:10:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:41.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:41.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.600 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.601 17:10:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:41.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:41.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.601 17:10:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:25:41.601 00:25:41.601 --- 10.0.0.2 ping statistics --- 00:25:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.601 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:25:41.601 00:25:41.601 --- 10.0.0.1 ping statistics --- 00:25:41.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.601 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1596862 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1596862 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1596862 ']' 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:41.601 17:10:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.601 [2024-05-15 17:10:19.298878] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
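
The nvmf_tcp_init sequence traced above boils down to a small namespace topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction acts as a sanity check before nvme-tcp is loaded and nvmf_tgt is started inside the namespace. Condensed into a sketch (interface names and addresses are the ones this run detected; it illustrates the pattern rather than replacing nvmf/common.sh):

TARGET_IF=cvl_0_0        # target-side E810 port, as detected above
INITIATOR_IF=cvl_0_1     # initiator-side E810 port
NS=cvl_0_0_ns_spdk       # namespace the target runs in

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                    # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # NVMF_FIRST_TARGET_IP
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic reach port 4420 through the initiator-facing interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability both ways, then load the kernel host driver
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp
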
00:25:41.601 [2024-05-15 17:10:19.298940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.601 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.601 [2024-05-15 17:10:19.386092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.601 [2024-05-15 17:10:19.481972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.601 [2024-05-15 17:10:19.482027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.601 [2024-05-15 17:10:19.482035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.601 [2024-05-15 17:10:19.482043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.601 [2024-05-15 17:10:19.482049] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.601 [2024-05-15 17:10:19.482074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.601 [2024-05-15 17:10:20.145526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.601 [2024-05-15 17:10:20.153491] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:41.601 [2024-05-15 17:10:20.153810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:41.601 null0 00:25:41.601 [2024-05-15 17:10:20.185723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1596963 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1596963 /tmp/host.sock 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1596963 ']' 00:25:41.601 17:10:20 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:41.601 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:41.602 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:41.602 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:41.602 17:10:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.602 [2024-05-15 17:10:20.259353] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:25:41.602 [2024-05-15 17:10:20.259413] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596963 ] 00:25:41.602 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.602 [2024-05-15 17:10:20.323111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.602 [2024-05-15 17:10:20.398875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.541 17:10:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.481 [2024-05-15 17:10:22.153756] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.481 [2024-05-15 17:10:22.153783] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.481 [2024-05-15 
17:10:22.153797] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.482 [2024-05-15 17:10:22.240065] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:43.482 [2024-05-15 17:10:22.296392] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:43.482 [2024-05-15 17:10:22.296442] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:43.482 [2024-05-15 17:10:22.296463] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:43.482 [2024-05-15 17:10:22.296477] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.482 [2024-05-15 17:10:22.296496] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.482 [2024-05-15 17:10:22.302910] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbeb3d0 was disconnected and freed. delete nvme_qpair. 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.482 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:43.742 17:10:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:45.125 17:10:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.067 17:10:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.009 17:10:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.951 17:10:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:49.336 [2024-05-15 17:10:27.736956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:49.336 [2024-05-15 17:10:27.736999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.336 [2024-05-15 17:10:27.737010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.336 [2024-05-15 17:10:27.737020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.336 [2024-05-15 17:10:27.737028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.336 [2024-05-15 17:10:27.737036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.336 [2024-05-15 17:10:27.737043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.336 [2024-05-15 17:10:27.737050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.336 [2024-05-15 17:10:27.737057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.336 [2024-05-15 17:10:27.737066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.336 [2024-05-15 17:10:27.737073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.336 [2024-05-15 17:10:27.737080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2790 is same with the state(5) to be set 00:25:49.336 [2024-05-15 17:10:27.746975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb2790 (9): Bad file descriptor 00:25:49.336 
17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.336 [2024-05-15 17:10:27.757016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:49.336 17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.336 17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.336 17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.336 17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.336 17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.336 17:10:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.278 [2024-05-15 17:10:28.761571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:51.220 [2024-05-15 17:10:29.785590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:51.220 [2024-05-15 17:10:29.785635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb2790 with addr=10.0.0.2, port=4420 00:25:51.220 [2024-05-15 17:10:29.785651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb2790 is same with the state(5) to be set 00:25:51.220 [2024-05-15 17:10:29.786025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb2790 (9): Bad file descriptor 00:25:51.220 [2024-05-15 17:10:29.786049] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.220 [2024-05-15 17:10:29.786068] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:51.220 [2024-05-15 17:10:29.786098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.220 [2024-05-15 17:10:29.786109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.220 [2024-05-15 17:10:29.786119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.220 [2024-05-15 17:10:29.786127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.220 [2024-05-15 17:10:29.786135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.220 [2024-05-15 17:10:29.786142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.220 [2024-05-15 17:10:29.786150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.220 [2024-05-15 17:10:29.786157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.220 [2024-05-15 17:10:29.786166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.220 [2024-05-15 17:10:29.786173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.220 [2024-05-15 17:10:29.786180] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:51.220 [2024-05-15 17:10:29.786677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1c20 (9): Bad file descriptor 00:25:51.220 [2024-05-15 17:10:29.787690] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:51.220 [2024-05-15 17:10:29.787702] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:51.220 17:10:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.220 17:10:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:51.220 17:10:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.163 17:10:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.424 17:10:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:52.424 17:10:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.367 [2024-05-15 17:10:31.839661] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:53.367 [2024-05-15 17:10:31.839683] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:53.367 [2024-05-15 17:10:31.839696] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.367 [2024-05-15 17:10:31.966123] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:53.367 17:10:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.367 [2024-05-15 17:10:32.191394] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:53.367 [2024-05-15 17:10:32.191437] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:53.367 [2024-05-15 17:10:32.191457] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:53.367 [2024-05-15 17:10:32.191471] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:53.367 [2024-05-15 17:10:32.191479] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:53.367 [2024-05-15 17:10:32.197445] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbf36f0 was disconnected and freed. delete nvme_qpair. 
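
The repeated bdev_get_bdevs / sleep 1 pairs above are the test's wait_for_bdev loop at work: after bdev_nvme_start_discovery attaches nvme0n1, the script deletes 10.0.0.2 and downs cvl_0_0 inside the namespace, waits for the bdev list to drain, then restores the interface and waits for the rediscovered nvme1n1. A reconstruction of that polling helper as it appears in the trace (the direct rpc.py invocation is an assumption; the script itself goes through its rpc_cmd wrapper with -s /tmp/host.sock):

HOST_SOCK=/tmp/host.sock          # RPC socket of the host-side SPDK app
RPC=./scripts/rpc.py              # assumed direct path; the trace uses rpc_cmd

get_bdev_list() {
    # Same pipeline as the traced calls: bdev names only, sorted, one line
    "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once a second until the bdev list equals the expected value:
    # "nvme0n1" while the first path is attached, "" after the interface
    # is removed, "nvme1n1" once discovery re-attaches the subsystem.
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
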
00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1596963 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1596963 ']' 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1596963 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:54.310 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1596963 00:25:54.570 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:54.570 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:54.570 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1596963' 00:25:54.570 killing process with pid 1596963 00:25:54.570 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1596963 00:25:54.570 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1596963 00:25:54.570 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.571 rmmod nvme_tcp 00:25:54.571 rmmod nvme_fabrics 00:25:54.571 rmmod nvme_keyring 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1596862 ']' 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1596862 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1596862 ']' 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1596862 00:25:54.571 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1596862 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1596862' 00:25:54.831 killing process with pid 1596862 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1596862 00:25:54.831 [2024-05-15 17:10:33.454908] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1596862 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.831 17:10:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.377 17:10:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.377 00:25:57.377 real 0m23.575s 00:25:57.377 user 0m27.891s 00:25:57.377 sys 0m6.450s 00:25:57.377 17:10:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:57.377 17:10:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.377 ************************************ 00:25:57.377 END TEST nvmf_discovery_remove_ifc 00:25:57.377 ************************************ 00:25:57.377 17:10:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.377 17:10:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:57.377 17:10:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:57.377 17:10:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
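
The teardown of nvmf_discovery_remove_ifc traced above follows the usual shape: release the error-path trap, kill the host-side app (pid 1596963 in this run), run nvmftestfini to unload nvme-tcp/nvme-fabrics and stop the namespaced nvmf_tgt (pid 1596862), then remove the namespace and flush the initiator address. Sketched below with the helper bodies only approximated from the trace:

trap - SIGINT SIGTERM EXIT            # test passed; drop the cleanup trap

kill "$hostpid" && wait "$hostpid"    # host-side app listening on /tmp/host.sock

# nvmfcleanup: unload the kernel host stack, tolerating transient "in use"
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e

kill "$nvmfpid" && wait "$nvmfpid"    # nvmf_tgt running inside cvl_0_0_ns_spdk

# nvmf_tcp_fini: drop the namespace and flush the initiator interface
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1
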
00:25:57.377 ************************************ 00:25:57.377 START TEST nvmf_identify_kernel_target 00:25:57.377 ************************************ 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.377 * Looking for test storage... 00:25:57.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:57.377 17:10:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.377 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.378 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.378 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.378 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.378 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.378 17:10:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:03.963 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:03.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:03.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:03.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:03.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.964 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:26:04.226 00:26:04.226 --- 10.0.0.2 ping statistics --- 00:26:04.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.226 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:26:04.226 17:10:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
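The commands traced above build the two-port NVMe/TCP test topology on this host: one e810 port (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the peer port (cvl_0_1) stays in the default namespace as 10.0.0.1/24, both links are brought up, TCP port 4420 is opened in iptables, and connectivity is verified with ping in both directions. Condensed from the trace (run as root; the interface names are specific to this CI machine):

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port of the NIC pair into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # default ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> default ns

The sub-millisecond round-trip times in the ping output that follows confirm the two ports are cabled back-to-back, so the target and initiator can run on the same box over a real NIC.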
00:26:04.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:26:04.226 00:26:04.226 --- 10.0.0.1 ping statistics --- 00:26:04.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.226 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:04.226 17:10:43 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:04.226 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:04.488 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:04.488 17:10:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:07.793 Waiting for block devices as requested 00:26:07.793 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:07.793 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:07.793 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:07.793 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:08.054 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:08.054 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:08.054 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:08.315 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:08.315 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:08.642 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:08.642 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:08.642 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:08.642 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:08.642 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:08.904 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:08.904 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:08.904 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:09.165 No valid GPT data, bailing 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:09.165 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:09.428 17:10:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:09.428 00:26:09.428 Discovery Log Number of Records 2, Generation counter 2 00:26:09.428 =====Discovery Log Entry 0====== 00:26:09.428 trtype: tcp 00:26:09.428 adrfam: ipv4 00:26:09.428 subtype: current discovery subsystem 00:26:09.428 treq: not specified, sq flow control disable supported 00:26:09.428 portid: 1 00:26:09.428 trsvcid: 4420 00:26:09.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:09.428 traddr: 10.0.0.1 00:26:09.428 eflags: none 00:26:09.428 sectype: none 00:26:09.428 =====Discovery Log Entry 1====== 00:26:09.428 trtype: tcp 00:26:09.428 adrfam: ipv4 00:26:09.428 subtype: nvme subsystem 00:26:09.428 treq: not specified, sq flow control disable supported 00:26:09.428 portid: 1 00:26:09.428 trsvcid: 4420 00:26:09.428 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:09.428 traddr: 10.0.0.1 00:26:09.428 eflags: none 00:26:09.428 sectype: none 00:26:09.428 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:09.428 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:09.429 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.429 ===================================================== 00:26:09.429 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:09.429 ===================================================== 00:26:09.429 Controller Capabilities/Features 00:26:09.429 ================================ 00:26:09.429 Vendor ID: 0000 00:26:09.429 Subsystem Vendor ID: 0000 00:26:09.429 Serial Number: d3e3c5a573397b641ea9 00:26:09.429 Model Number: Linux 00:26:09.429 Firmware Version: 6.7.0-68 00:26:09.429 Recommended Arb Burst: 0 00:26:09.429 IEEE OUI Identifier: 00 00 00 00:26:09.429 Multi-path I/O 00:26:09.429 May have multiple subsystem ports: No 00:26:09.429 May have multiple 
controllers: No 00:26:09.429 Associated with SR-IOV VF: No 00:26:09.429 Max Data Transfer Size: Unlimited 00:26:09.429 Max Number of Namespaces: 0 00:26:09.429 Max Number of I/O Queues: 1024 00:26:09.429 NVMe Specification Version (VS): 1.3 00:26:09.429 NVMe Specification Version (Identify): 1.3 00:26:09.429 Maximum Queue Entries: 1024 00:26:09.429 Contiguous Queues Required: No 00:26:09.429 Arbitration Mechanisms Supported 00:26:09.429 Weighted Round Robin: Not Supported 00:26:09.429 Vendor Specific: Not Supported 00:26:09.429 Reset Timeout: 7500 ms 00:26:09.429 Doorbell Stride: 4 bytes 00:26:09.429 NVM Subsystem Reset: Not Supported 00:26:09.429 Command Sets Supported 00:26:09.429 NVM Command Set: Supported 00:26:09.429 Boot Partition: Not Supported 00:26:09.429 Memory Page Size Minimum: 4096 bytes 00:26:09.429 Memory Page Size Maximum: 4096 bytes 00:26:09.429 Persistent Memory Region: Not Supported 00:26:09.429 Optional Asynchronous Events Supported 00:26:09.429 Namespace Attribute Notices: Not Supported 00:26:09.429 Firmware Activation Notices: Not Supported 00:26:09.429 ANA Change Notices: Not Supported 00:26:09.429 PLE Aggregate Log Change Notices: Not Supported 00:26:09.429 LBA Status Info Alert Notices: Not Supported 00:26:09.429 EGE Aggregate Log Change Notices: Not Supported 00:26:09.429 Normal NVM Subsystem Shutdown event: Not Supported 00:26:09.429 Zone Descriptor Change Notices: Not Supported 00:26:09.429 Discovery Log Change Notices: Supported 00:26:09.429 Controller Attributes 00:26:09.429 128-bit Host Identifier: Not Supported 00:26:09.429 Non-Operational Permissive Mode: Not Supported 00:26:09.429 NVM Sets: Not Supported 00:26:09.429 Read Recovery Levels: Not Supported 00:26:09.429 Endurance Groups: Not Supported 00:26:09.429 Predictable Latency Mode: Not Supported 00:26:09.429 Traffic Based Keep ALive: Not Supported 00:26:09.429 Namespace Granularity: Not Supported 00:26:09.429 SQ Associations: Not Supported 00:26:09.429 UUID List: Not Supported 00:26:09.429 Multi-Domain Subsystem: Not Supported 00:26:09.429 Fixed Capacity Management: Not Supported 00:26:09.429 Variable Capacity Management: Not Supported 00:26:09.429 Delete Endurance Group: Not Supported 00:26:09.429 Delete NVM Set: Not Supported 00:26:09.429 Extended LBA Formats Supported: Not Supported 00:26:09.429 Flexible Data Placement Supported: Not Supported 00:26:09.429 00:26:09.429 Controller Memory Buffer Support 00:26:09.429 ================================ 00:26:09.429 Supported: No 00:26:09.429 00:26:09.429 Persistent Memory Region Support 00:26:09.429 ================================ 00:26:09.429 Supported: No 00:26:09.429 00:26:09.429 Admin Command Set Attributes 00:26:09.429 ============================ 00:26:09.429 Security Send/Receive: Not Supported 00:26:09.429 Format NVM: Not Supported 00:26:09.429 Firmware Activate/Download: Not Supported 00:26:09.429 Namespace Management: Not Supported 00:26:09.429 Device Self-Test: Not Supported 00:26:09.429 Directives: Not Supported 00:26:09.429 NVMe-MI: Not Supported 00:26:09.429 Virtualization Management: Not Supported 00:26:09.429 Doorbell Buffer Config: Not Supported 00:26:09.429 Get LBA Status Capability: Not Supported 00:26:09.429 Command & Feature Lockdown Capability: Not Supported 00:26:09.429 Abort Command Limit: 1 00:26:09.429 Async Event Request Limit: 1 00:26:09.429 Number of Firmware Slots: N/A 00:26:09.429 Firmware Slot 1 Read-Only: N/A 00:26:09.429 Firmware Activation Without Reset: N/A 00:26:09.429 Multiple Update Detection Support: N/A 
00:26:09.429 Firmware Update Granularity: No Information Provided 00:26:09.429 Per-Namespace SMART Log: No 00:26:09.429 Asymmetric Namespace Access Log Page: Not Supported 00:26:09.429 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:09.429 Command Effects Log Page: Not Supported 00:26:09.429 Get Log Page Extended Data: Supported 00:26:09.429 Telemetry Log Pages: Not Supported 00:26:09.429 Persistent Event Log Pages: Not Supported 00:26:09.429 Supported Log Pages Log Page: May Support 00:26:09.429 Commands Supported & Effects Log Page: Not Supported 00:26:09.429 Feature Identifiers & Effects Log Page:May Support 00:26:09.429 NVMe-MI Commands & Effects Log Page: May Support 00:26:09.429 Data Area 4 for Telemetry Log: Not Supported 00:26:09.429 Error Log Page Entries Supported: 1 00:26:09.429 Keep Alive: Not Supported 00:26:09.429 00:26:09.429 NVM Command Set Attributes 00:26:09.429 ========================== 00:26:09.429 Submission Queue Entry Size 00:26:09.429 Max: 1 00:26:09.429 Min: 1 00:26:09.429 Completion Queue Entry Size 00:26:09.429 Max: 1 00:26:09.429 Min: 1 00:26:09.429 Number of Namespaces: 0 00:26:09.429 Compare Command: Not Supported 00:26:09.429 Write Uncorrectable Command: Not Supported 00:26:09.429 Dataset Management Command: Not Supported 00:26:09.429 Write Zeroes Command: Not Supported 00:26:09.429 Set Features Save Field: Not Supported 00:26:09.429 Reservations: Not Supported 00:26:09.429 Timestamp: Not Supported 00:26:09.429 Copy: Not Supported 00:26:09.429 Volatile Write Cache: Not Present 00:26:09.429 Atomic Write Unit (Normal): 1 00:26:09.429 Atomic Write Unit (PFail): 1 00:26:09.429 Atomic Compare & Write Unit: 1 00:26:09.429 Fused Compare & Write: Not Supported 00:26:09.429 Scatter-Gather List 00:26:09.429 SGL Command Set: Supported 00:26:09.429 SGL Keyed: Not Supported 00:26:09.429 SGL Bit Bucket Descriptor: Not Supported 00:26:09.429 SGL Metadata Pointer: Not Supported 00:26:09.429 Oversized SGL: Not Supported 00:26:09.429 SGL Metadata Address: Not Supported 00:26:09.429 SGL Offset: Supported 00:26:09.429 Transport SGL Data Block: Not Supported 00:26:09.429 Replay Protected Memory Block: Not Supported 00:26:09.429 00:26:09.429 Firmware Slot Information 00:26:09.429 ========================= 00:26:09.429 Active slot: 0 00:26:09.429 00:26:09.429 00:26:09.429 Error Log 00:26:09.429 ========= 00:26:09.429 00:26:09.429 Active Namespaces 00:26:09.429 ================= 00:26:09.429 Discovery Log Page 00:26:09.429 ================== 00:26:09.429 Generation Counter: 2 00:26:09.429 Number of Records: 2 00:26:09.429 Record Format: 0 00:26:09.429 00:26:09.429 Discovery Log Entry 0 00:26:09.429 ---------------------- 00:26:09.429 Transport Type: 3 (TCP) 00:26:09.429 Address Family: 1 (IPv4) 00:26:09.429 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:09.429 Entry Flags: 00:26:09.429 Duplicate Returned Information: 0 00:26:09.429 Explicit Persistent Connection Support for Discovery: 0 00:26:09.429 Transport Requirements: 00:26:09.429 Secure Channel: Not Specified 00:26:09.429 Port ID: 1 (0x0001) 00:26:09.429 Controller ID: 65535 (0xffff) 00:26:09.429 Admin Max SQ Size: 32 00:26:09.429 Transport Service Identifier: 4420 00:26:09.429 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:09.429 Transport Address: 10.0.0.1 00:26:09.429 Discovery Log Entry 1 00:26:09.429 ---------------------- 00:26:09.429 Transport Type: 3 (TCP) 00:26:09.429 Address Family: 1 (IPv4) 00:26:09.429 Subsystem Type: 2 (NVM Subsystem) 00:26:09.429 Entry Flags: 
00:26:09.429 Duplicate Returned Information: 0 00:26:09.429 Explicit Persistent Connection Support for Discovery: 0 00:26:09.429 Transport Requirements: 00:26:09.429 Secure Channel: Not Specified 00:26:09.429 Port ID: 1 (0x0001) 00:26:09.429 Controller ID: 65535 (0xffff) 00:26:09.429 Admin Max SQ Size: 32 00:26:09.429 Transport Service Identifier: 4420 00:26:09.429 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:09.429 Transport Address: 10.0.0.1 00:26:09.429 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:09.429 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.429 get_feature(0x01) failed 00:26:09.429 get_feature(0x02) failed 00:26:09.429 get_feature(0x04) failed 00:26:09.429 ===================================================== 00:26:09.429 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:09.429 ===================================================== 00:26:09.429 Controller Capabilities/Features 00:26:09.429 ================================ 00:26:09.429 Vendor ID: 0000 00:26:09.429 Subsystem Vendor ID: 0000 00:26:09.429 Serial Number: 930b5f4ff61b1e546c00 00:26:09.430 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:09.430 Firmware Version: 6.7.0-68 00:26:09.430 Recommended Arb Burst: 6 00:26:09.430 IEEE OUI Identifier: 00 00 00 00:26:09.430 Multi-path I/O 00:26:09.430 May have multiple subsystem ports: Yes 00:26:09.430 May have multiple controllers: Yes 00:26:09.430 Associated with SR-IOV VF: No 00:26:09.430 Max Data Transfer Size: Unlimited 00:26:09.430 Max Number of Namespaces: 1024 00:26:09.430 Max Number of I/O Queues: 128 00:26:09.430 NVMe Specification Version (VS): 1.3 00:26:09.430 NVMe Specification Version (Identify): 1.3 00:26:09.430 Maximum Queue Entries: 1024 00:26:09.430 Contiguous Queues Required: No 00:26:09.430 Arbitration Mechanisms Supported 00:26:09.430 Weighted Round Robin: Not Supported 00:26:09.430 Vendor Specific: Not Supported 00:26:09.430 Reset Timeout: 7500 ms 00:26:09.430 Doorbell Stride: 4 bytes 00:26:09.430 NVM Subsystem Reset: Not Supported 00:26:09.430 Command Sets Supported 00:26:09.430 NVM Command Set: Supported 00:26:09.430 Boot Partition: Not Supported 00:26:09.430 Memory Page Size Minimum: 4096 bytes 00:26:09.430 Memory Page Size Maximum: 4096 bytes 00:26:09.430 Persistent Memory Region: Not Supported 00:26:09.430 Optional Asynchronous Events Supported 00:26:09.430 Namespace Attribute Notices: Supported 00:26:09.430 Firmware Activation Notices: Not Supported 00:26:09.430 ANA Change Notices: Supported 00:26:09.430 PLE Aggregate Log Change Notices: Not Supported 00:26:09.430 LBA Status Info Alert Notices: Not Supported 00:26:09.430 EGE Aggregate Log Change Notices: Not Supported 00:26:09.430 Normal NVM Subsystem Shutdown event: Not Supported 00:26:09.430 Zone Descriptor Change Notices: Not Supported 00:26:09.430 Discovery Log Change Notices: Not Supported 00:26:09.430 Controller Attributes 00:26:09.430 128-bit Host Identifier: Supported 00:26:09.430 Non-Operational Permissive Mode: Not Supported 00:26:09.430 NVM Sets: Not Supported 00:26:09.430 Read Recovery Levels: Not Supported 00:26:09.430 Endurance Groups: Not Supported 00:26:09.430 Predictable Latency Mode: Not Supported 00:26:09.430 Traffic Based Keep ALive: Supported 00:26:09.430 Namespace Granularity: Not Supported 
00:26:09.430 SQ Associations: Not Supported 00:26:09.430 UUID List: Not Supported 00:26:09.430 Multi-Domain Subsystem: Not Supported 00:26:09.430 Fixed Capacity Management: Not Supported 00:26:09.430 Variable Capacity Management: Not Supported 00:26:09.430 Delete Endurance Group: Not Supported 00:26:09.430 Delete NVM Set: Not Supported 00:26:09.430 Extended LBA Formats Supported: Not Supported 00:26:09.430 Flexible Data Placement Supported: Not Supported 00:26:09.430 00:26:09.430 Controller Memory Buffer Support 00:26:09.430 ================================ 00:26:09.430 Supported: No 00:26:09.430 00:26:09.430 Persistent Memory Region Support 00:26:09.430 ================================ 00:26:09.430 Supported: No 00:26:09.430 00:26:09.430 Admin Command Set Attributes 00:26:09.430 ============================ 00:26:09.430 Security Send/Receive: Not Supported 00:26:09.430 Format NVM: Not Supported 00:26:09.430 Firmware Activate/Download: Not Supported 00:26:09.430 Namespace Management: Not Supported 00:26:09.430 Device Self-Test: Not Supported 00:26:09.430 Directives: Not Supported 00:26:09.430 NVMe-MI: Not Supported 00:26:09.430 Virtualization Management: Not Supported 00:26:09.430 Doorbell Buffer Config: Not Supported 00:26:09.430 Get LBA Status Capability: Not Supported 00:26:09.430 Command & Feature Lockdown Capability: Not Supported 00:26:09.430 Abort Command Limit: 4 00:26:09.430 Async Event Request Limit: 4 00:26:09.430 Number of Firmware Slots: N/A 00:26:09.430 Firmware Slot 1 Read-Only: N/A 00:26:09.430 Firmware Activation Without Reset: N/A 00:26:09.430 Multiple Update Detection Support: N/A 00:26:09.430 Firmware Update Granularity: No Information Provided 00:26:09.430 Per-Namespace SMART Log: Yes 00:26:09.430 Asymmetric Namespace Access Log Page: Supported 00:26:09.430 ANA Transition Time : 10 sec 00:26:09.430 00:26:09.430 Asymmetric Namespace Access Capabilities 00:26:09.430 ANA Optimized State : Supported 00:26:09.430 ANA Non-Optimized State : Supported 00:26:09.430 ANA Inaccessible State : Supported 00:26:09.430 ANA Persistent Loss State : Supported 00:26:09.430 ANA Change State : Supported 00:26:09.430 ANAGRPID is not changed : No 00:26:09.430 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:09.430 00:26:09.430 ANA Group Identifier Maximum : 128 00:26:09.430 Number of ANA Group Identifiers : 128 00:26:09.430 Max Number of Allowed Namespaces : 1024 00:26:09.430 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:09.430 Command Effects Log Page: Supported 00:26:09.430 Get Log Page Extended Data: Supported 00:26:09.430 Telemetry Log Pages: Not Supported 00:26:09.430 Persistent Event Log Pages: Not Supported 00:26:09.430 Supported Log Pages Log Page: May Support 00:26:09.430 Commands Supported & Effects Log Page: Not Supported 00:26:09.430 Feature Identifiers & Effects Log Page:May Support 00:26:09.430 NVMe-MI Commands & Effects Log Page: May Support 00:26:09.430 Data Area 4 for Telemetry Log: Not Supported 00:26:09.430 Error Log Page Entries Supported: 128 00:26:09.430 Keep Alive: Supported 00:26:09.430 Keep Alive Granularity: 1000 ms 00:26:09.430 00:26:09.430 NVM Command Set Attributes 00:26:09.430 ========================== 00:26:09.430 Submission Queue Entry Size 00:26:09.430 Max: 64 00:26:09.430 Min: 64 00:26:09.430 Completion Queue Entry Size 00:26:09.430 Max: 16 00:26:09.430 Min: 16 00:26:09.430 Number of Namespaces: 1024 00:26:09.430 Compare Command: Not Supported 00:26:09.430 Write Uncorrectable Command: Not Supported 00:26:09.430 Dataset Management Command: Supported 
00:26:09.430 Write Zeroes Command: Supported 00:26:09.430 Set Features Save Field: Not Supported 00:26:09.430 Reservations: Not Supported 00:26:09.430 Timestamp: Not Supported 00:26:09.430 Copy: Not Supported 00:26:09.430 Volatile Write Cache: Present 00:26:09.430 Atomic Write Unit (Normal): 1 00:26:09.430 Atomic Write Unit (PFail): 1 00:26:09.430 Atomic Compare & Write Unit: 1 00:26:09.430 Fused Compare & Write: Not Supported 00:26:09.430 Scatter-Gather List 00:26:09.430 SGL Command Set: Supported 00:26:09.430 SGL Keyed: Not Supported 00:26:09.430 SGL Bit Bucket Descriptor: Not Supported 00:26:09.430 SGL Metadata Pointer: Not Supported 00:26:09.430 Oversized SGL: Not Supported 00:26:09.430 SGL Metadata Address: Not Supported 00:26:09.430 SGL Offset: Supported 00:26:09.430 Transport SGL Data Block: Not Supported 00:26:09.430 Replay Protected Memory Block: Not Supported 00:26:09.430 00:26:09.430 Firmware Slot Information 00:26:09.430 ========================= 00:26:09.430 Active slot: 0 00:26:09.430 00:26:09.430 Asymmetric Namespace Access 00:26:09.430 =========================== 00:26:09.430 Change Count : 0 00:26:09.430 Number of ANA Group Descriptors : 1 00:26:09.430 ANA Group Descriptor : 0 00:26:09.430 ANA Group ID : 1 00:26:09.430 Number of NSID Values : 1 00:26:09.430 Change Count : 0 00:26:09.430 ANA State : 1 00:26:09.430 Namespace Identifier : 1 00:26:09.430 00:26:09.430 Commands Supported and Effects 00:26:09.430 ============================== 00:26:09.430 Admin Commands 00:26:09.430 -------------- 00:26:09.430 Get Log Page (02h): Supported 00:26:09.430 Identify (06h): Supported 00:26:09.430 Abort (08h): Supported 00:26:09.430 Set Features (09h): Supported 00:26:09.430 Get Features (0Ah): Supported 00:26:09.430 Asynchronous Event Request (0Ch): Supported 00:26:09.430 Keep Alive (18h): Supported 00:26:09.430 I/O Commands 00:26:09.430 ------------ 00:26:09.430 Flush (00h): Supported 00:26:09.430 Write (01h): Supported LBA-Change 00:26:09.430 Read (02h): Supported 00:26:09.430 Write Zeroes (08h): Supported LBA-Change 00:26:09.430 Dataset Management (09h): Supported 00:26:09.430 00:26:09.430 Error Log 00:26:09.430 ========= 00:26:09.430 Entry: 0 00:26:09.430 Error Count: 0x3 00:26:09.430 Submission Queue Id: 0x0 00:26:09.430 Command Id: 0x5 00:26:09.430 Phase Bit: 0 00:26:09.430 Status Code: 0x2 00:26:09.430 Status Code Type: 0x0 00:26:09.430 Do Not Retry: 1 00:26:09.430 Error Location: 0x28 00:26:09.430 LBA: 0x0 00:26:09.430 Namespace: 0x0 00:26:09.430 Vendor Log Page: 0x0 00:26:09.430 ----------- 00:26:09.430 Entry: 1 00:26:09.430 Error Count: 0x2 00:26:09.430 Submission Queue Id: 0x0 00:26:09.430 Command Id: 0x5 00:26:09.430 Phase Bit: 0 00:26:09.430 Status Code: 0x2 00:26:09.430 Status Code Type: 0x0 00:26:09.430 Do Not Retry: 1 00:26:09.430 Error Location: 0x28 00:26:09.430 LBA: 0x0 00:26:09.430 Namespace: 0x0 00:26:09.430 Vendor Log Page: 0x0 00:26:09.430 ----------- 00:26:09.430 Entry: 2 00:26:09.430 Error Count: 0x1 00:26:09.430 Submission Queue Id: 0x0 00:26:09.430 Command Id: 0x4 00:26:09.430 Phase Bit: 0 00:26:09.430 Status Code: 0x2 00:26:09.430 Status Code Type: 0x0 00:26:09.430 Do Not Retry: 1 00:26:09.430 Error Location: 0x28 00:26:09.430 LBA: 0x0 00:26:09.431 Namespace: 0x0 00:26:09.431 Vendor Log Page: 0x0 00:26:09.431 00:26:09.431 Number of Queues 00:26:09.431 ================ 00:26:09.431 Number of I/O Submission Queues: 128 00:26:09.431 Number of I/O Completion Queues: 128 00:26:09.431 00:26:09.431 ZNS Specific Controller Data 00:26:09.431 
============================ 00:26:09.431 Zone Append Size Limit: 0 00:26:09.431 00:26:09.431 00:26:09.431 Active Namespaces 00:26:09.431 ================= 00:26:09.431 get_feature(0x05) failed 00:26:09.431 Namespace ID:1 00:26:09.431 Command Set Identifier: NVM (00h) 00:26:09.431 Deallocate: Supported 00:26:09.431 Deallocated/Unwritten Error: Not Supported 00:26:09.431 Deallocated Read Value: Unknown 00:26:09.431 Deallocate in Write Zeroes: Not Supported 00:26:09.431 Deallocated Guard Field: 0xFFFF 00:26:09.431 Flush: Supported 00:26:09.431 Reservation: Not Supported 00:26:09.431 Namespace Sharing Capabilities: Multiple Controllers 00:26:09.431 Size (in LBAs): 3750748848 (1788GiB) 00:26:09.431 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:09.431 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:09.431 UUID: 971a50ed-700d-40e7-b429-78d9579200a0 00:26:09.431 Thin Provisioning: Not Supported 00:26:09.431 Per-NS Atomic Units: Yes 00:26:09.431 Atomic Write Unit (Normal): 8 00:26:09.431 Atomic Write Unit (PFail): 8 00:26:09.431 Preferred Write Granularity: 8 00:26:09.431 Atomic Compare & Write Unit: 8 00:26:09.431 Atomic Boundary Size (Normal): 0 00:26:09.431 Atomic Boundary Size (PFail): 0 00:26:09.431 Atomic Boundary Offset: 0 00:26:09.431 NGUID/EUI64 Never Reused: No 00:26:09.431 ANA group ID: 1 00:26:09.431 Namespace Write Protected: No 00:26:09.431 Number of LBA Formats: 1 00:26:09.431 Current LBA Format: LBA Format #00 00:26:09.431 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:09.431 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.431 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:09.431 rmmod nvme_tcp 00:26:09.692 rmmod nvme_fabrics 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.692 17:10:48 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:11.605 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:11.866 17:10:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:15.169 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:15.169 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:15.429 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:15.429 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:15.429 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:15.429 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:15.689 00:26:15.689 real 0m18.697s 00:26:15.689 user 0m5.022s 00:26:15.689 sys 0m10.675s 00:26:15.689 17:10:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:15.689 17:10:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.689 ************************************ 00:26:15.689 END TEST nvmf_identify_kernel_target 00:26:15.689 ************************************ 00:26:15.689 17:10:54 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:15.689 17:10:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:15.689 17:10:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:15.689 17:10:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:15.689 ************************************ 00:26:15.689 START TEST nvmf_auth_host 00:26:15.689 ************************************ 
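Both the identify_kernel_nvmf test that just ended and the auth test starting here drive the Linux in-kernel nvmet target through configfs rather than the SPDK target app. Condensed from the configure_kernel_target / clean_kernel_target steps visible in the trace, the lifecycle looks roughly like this; the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet configfs names and are filled in here as an assumption, since the xtrace only records the values being echoed, not the files they are redirected into:

NQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$NQN
NS=$SUB/namespaces/1
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                       # as in the trace; the TCP transport module is pulled in when the port is bound
mkdir "$SUB"                         # subsystem, namespace and port directories
mkdir "$NS"
mkdir "$PORT"
echo "SPDK-$NQN"  > "$SUB/attr_model"            # shows up as the Model Number in the identify output above
echo 1            > "$SUB/attr_allow_any_host"
echo /dev/nvme0n1 > "$NS/device_path"            # back the namespace with the local NVMe disk
echo 1            > "$NS/enable"
echo 10.0.0.1     > "$PORT/addr_traddr"          # the address configured in the namespace setup earlier
echo tcp          > "$PORT/addr_trtype"
echo 4420         > "$PORT/addr_trsvcid"
echo ipv4         > "$PORT/addr_adrfam"
ln -s "$SUB" "$PORT/subsystems/"                 # expose the subsystem on the port
# verify, as the trace does:
#   nvme discover --hostnqn=<hostnqn> --hostid=<hostid> -a 10.0.0.1 -t tcp -s 4420

echo 0 > "$NS/enable"                            # teardown, mirroring clean_kernel_target
rm -f "$PORT/subsystems/$NQN"
rmdir "$NS" "$PORT" "$SUB"
modprobe -r nvmet_tcp nvmet

With the symlink in place, the discovery log should report two entries, the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, which is exactly what the nvme discover output earlier in this test showed.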
00:26:15.689 17:10:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:15.950 * Looking for test storage... 00:26:15.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.950 17:10:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.537 
17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:22.537 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.537 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:22.538 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:22.538 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:22.538 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:22.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:26:22.538 00:26:22.538 --- 10.0.0.2 ping statistics --- 00:26:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.538 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:26:22.538 00:26:22.538 --- 10.0.0.1 ping statistics --- 00:26:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.538 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:22.538 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1611084 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1611084 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1611084 ']' 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
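[editor note] The records above show nvmf_tcp_init carving the two E810 ports into a back-to-back NVMe/TCP test topology: cvl_0_0 (NVMF_TARGET_INTERFACE) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 (NVMF_INITIATOR_INTERFACE) stays in the root namespace as 10.0.0.1, port 4420 is opened, and both directions are ping-checked. A minimal standalone sketch of that setup, assuming the same interface names and addressing as this run:

#!/usr/bin/env bash
# Sketch of the namespace/TCP topology built by nvmf_tcp_init above.
# Assumes two ports of one NIC: cvl_0_0 (goes into the namespace) and cvl_0_1 (root namespace).
set -euo pipefail

ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0        # ends up inside $ns with 10.0.0.2
ini_if=cvl_0_1        # stays in the root namespace with 10.0.0.1

ip -4 addr flush "$tgt_if"
ip -4 addr flush "$ini_if"

ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP traffic reach port 4420 on the root-namespace side.
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

# Same sanity pings as in the log.
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

Every SPDK process in this test is then prefixed with "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt launch below runs inside the namespace.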
00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.799 17:11:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:23.370 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:23.370 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:23.370 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.370 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.370 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af1569ae16f23a666256b8dbb20db6e6 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Vwk 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af1569ae16f23a666256b8dbb20db6e6 0 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af1569ae16f23a666256b8dbb20db6e6 0 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af1569ae16f23a666256b8dbb20db6e6 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Vwk 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Vwk 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Vwk 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=187e8cf6fff3e0fb957984894b6d6a97e5eb6cc363600edb5d66bd4039255e16 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vca 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 187e8cf6fff3e0fb957984894b6d6a97e5eb6cc363600edb5d66bd4039255e16 3 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 187e8cf6fff3e0fb957984894b6d6a97e5eb6cc363600edb5d66bd4039255e16 3 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=187e8cf6fff3e0fb957984894b6d6a97e5eb6cc363600edb5d66bd4039255e16 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vca 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vca 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Vca 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ababc890c2d272816f74ee73ca922f1f157db0d9b67b9c03 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.IsX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ababc890c2d272816f74ee73ca922f1f157db0d9b67b9c03 0 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ababc890c2d272816f74ee73ca922f1f157db0d9b67b9c03 0 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ababc890c2d272816f74ee73ca922f1f157db0d9b67b9c03 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:23.631 
17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.IsX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.IsX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.IsX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe7050833e9b1a772e4d36bfdb3eaf2b972e36b83ab3d609 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CCG 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe7050833e9b1a772e4d36bfdb3eaf2b972e36b83ab3d609 2 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe7050833e9b1a772e4d36bfdb3eaf2b972e36b83ab3d609 2 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fe7050833e9b1a772e4d36bfdb3eaf2b972e36b83ab3d609 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:23.631 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CCG 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CCG 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CCG 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2c0f0de83828b555abee9cfb0a138f74 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5KD 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key 2c0f0de83828b555abee9cfb0a138f74 1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2c0f0de83828b555abee9cfb0a138f74 1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2c0f0de83828b555abee9cfb0a138f74 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5KD 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5KD 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5KD 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=37e67cbf4a2abe9bc9fe11b65e4c1f39 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8qC 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 37e67cbf4a2abe9bc9fe11b65e4c1f39 1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 37e67cbf4a2abe9bc9fe11b65e4c1f39 1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=37e67cbf4a2abe9bc9fe11b65e4c1f39 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8qC 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8qC 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8qC 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:23.892 17:11:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f620b9fbb00296bcee97ca14b06f3a569b8704105a6e6ef9 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.syS 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f620b9fbb00296bcee97ca14b06f3a569b8704105a6e6ef9 2 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f620b9fbb00296bcee97ca14b06f3a569b8704105a6e6ef9 2 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f620b9fbb00296bcee97ca14b06f3a569b8704105a6e6ef9 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.syS 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.syS 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.syS 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:23.892 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=11e68c81e3df423d6f5fcfabdb16a10f 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ujh 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 11e68c81e3df423d6f5fcfabdb16a10f 0 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 11e68c81e3df423d6f5fcfabdb16a10f 0 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=11e68c81e3df423d6f5fcfabdb16a10f 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:23.893 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ujh 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ujh 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- 
# ckeys[3]=/tmp/spdk.key-null.ujh 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=64bfa463331c209859a61e479100efb9b78644bdc617bfc265cb3b875b20bb97 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XqU 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 64bfa463331c209859a61e479100efb9b78644bdc617bfc265cb3b875b20bb97 3 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 64bfa463331c209859a61e479100efb9b78644bdc617bfc265cb3b875b20bb97 3 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=64bfa463331c209859a61e479100efb9b78644bdc617bfc265cb3b875b20bb97 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XqU 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XqU 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.XqU 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1611084 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1611084 ']' 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
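[editor note] All five key/ckey pairs above come from gen_dhchap_key <digest> <hex-length>: random bytes from /dev/urandom are hex-encoded with xxd, format_dhchap_key wraps them into a DHHC-1 secret, and the result lands in a mode-0600 temp file. The python one-liner is not expanded in the xtrace output; the sketch below assumes the standard DH-HMAC-CHAP secret representation (base64 of the secret with its CRC-32 appended little-endian, prefixed with hash id 0=null, 1=sha256, 2=sha384, 3=sha512), which is consistent with the DHHC-1:<id>:...: strings that appear later in this log. Treat it as a hedged reconstruction, not the helper's exact code.

# gen_dhchap_key_sketch <digest-id> <hex-length>  ->  prints the key file path
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # <len> hex characters of secret material
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$hex" "$digest_id" > "$file" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
digest_id = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC-32 appended little-endian
print(f"DHHC-1:{digest_id:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

# e.g. a 48-character secret tagged for sha384 (digest id 2), as used for ckeys[1] above:
gen_dhchap_key_sketch 2 48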
00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Vwk 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Vca ]] 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vca 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.IsX 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.160 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.CCG ]] 00:26:24.429 17:11:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CCG 00:26:24.429 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.429 17:11:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5KD 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8qC ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8qC 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.syS 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ujh ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ujh 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.XqU 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
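[editor note] Once generated, every key file is handed to the running nvmf_tgt under a stable name (key0..key4, ckey0..ckey3) with keyring_file_add_key; those names, not the file paths, are what the later bdev_nvme_attach_controller calls reference. rpc_cmd above is the autotest wrapper around scripts/rpc.py; a condensed sketch of the same registration loop, assuming the default /var/tmp/spdk.sock RPC socket reported by waitforlisten and the file names produced in this run:

# Register the generated key material with the target's keyring.
rpc=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)

keys=(/tmp/spdk.key-null.Vwk /tmp/spdk.key-null.IsX /tmp/spdk.key-sha256.5KD /tmp/spdk.key-sha384.syS /tmp/spdk.key-sha512.XqU)
ckeys=(/tmp/spdk.key-sha512.Vca /tmp/spdk.key-sha384.CCG /tmp/spdk.key-sha256.8qC /tmp/spdk.key-null.ujh "")

for i in "${!keys[@]}"; do
    "${rpc[@]}" keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        "${rpc[@]}" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done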
00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:24.429 17:11:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:27.730 Waiting for block devices as requested 00:26:27.730 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:27.730 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:27.990 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:27.990 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:28.250 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:28.250 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:28.250 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:28.250 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:28.511 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:28.511 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:29.457 No valid GPT data, bailing 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:29.457 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:29.457 00:26:29.457 Discovery Log Number of Records 2, Generation counter 2 00:26:29.457 =====Discovery Log Entry 0====== 00:26:29.457 trtype: tcp 00:26:29.457 adrfam: ipv4 00:26:29.457 subtype: current discovery subsystem 00:26:29.457 treq: not specified, sq flow control disable supported 00:26:29.457 portid: 1 00:26:29.457 trsvcid: 4420 00:26:29.457 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:29.457 traddr: 10.0.0.1 00:26:29.457 eflags: none 00:26:29.457 sectype: none 00:26:29.457 =====Discovery Log Entry 1====== 00:26:29.457 trtype: tcp 00:26:29.457 adrfam: ipv4 00:26:29.457 subtype: nvme subsystem 00:26:29.457 treq: not specified, sq flow control disable supported 00:26:29.457 portid: 1 00:26:29.457 trsvcid: 4420 00:26:29.457 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:29.458 traddr: 10.0.0.1 00:26:29.458 eflags: none 00:26:29.458 sectype: none 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 
]] 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.458 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.718 nvme0n1 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.718 17:11:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.718 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.719 
17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.719 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.980 nvme0n1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.980 17:11:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.980 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.241 nvme0n1 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
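The trace above repeats one cycle per key index: restrict the SPDK host to a single digest/DH-group pair, attach to the kernel nvmet target with the matching DH-HMAC-CHAP keys, confirm the controller came up, then detach. A condensed sketch of that cycle in bash, assuming (as in this run) that rpc_cmd wraps scripts/rpc.py against the SPDK application under test and that the key0/ckey0 keyring entries were registered earlier in the test, outside this excerpt:

# one connect_authenticate-style cycle for sha256 + ffdhe2048 + key index 0
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# the handshake only counts as passed if the controller is really there
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0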
00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.241 17:11:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.242 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.242 17:11:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.242 nvme0n1 00:26:30.242 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.242 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.242 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.242 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.242 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:30.502 17:11:09 
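The get_main_ns_ip block that runs before every attach (nvmf/common.sh@741-755 in the trace) only decides which environment variable holds the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolved to 10.0.0.1 here via indirect expansion. A minimal sketch of that selection, assuming the transport name is carried in TEST_TRANSPORT as in the SPDK test harness:

# resolve the address the host should connect to for the configured transport
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    echo "${!ip}"                          # indirect expansion -> 10.0.0.1 in this run
}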
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.502 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.503 nvme0n1 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.503 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.764 nvme0n1 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.764 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.765 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.025 nvme0n1 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:31.025 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.026 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.286 17:11:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.286 nvme0n1 00:26:31.286 
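The bare nvme0n1 entries interleaved in the trace are the attach RPC reporting the namespace bdev it created, which only appears when the DH-HMAC-CHAP handshake succeeded; the check at host/auth.sh@64 then compares the controller list against the literal name nvme0 (the backslash-escaped \n\v\m\e\0 is just bash suppressing glob interpretation on the right-hand side of [[ == ]]). The same assertion, written out:

# a failed handshake leaves no controller behind, so this is the pass/fail check
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == nvme0 ]] || { echo "authentication failed for this digest/dhgroup/key"; exit 1; }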
17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.286 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 nvme0n1 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.547 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
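Each nvmet_auth_set_key call (host/auth.sh@42-51 in the trace) reprograms the kernel nvmet target before the host reconnects: it writes the HMAC name, the FFDHE group, the host key and, when one exists, the controller (bidirectional) key. The echoes above are those writes; their configfs destinations are not visible in this excerpt, so the paths in the sketch below are an assumption based on the usual /sys/kernel/config/nvmet host layout:

# assumed target-side effect of: nvmet_auth_set_key sha256 ffdhe3072 3
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
echo ffdhe3072 > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:02:ZjYyMGI5...:' > "$host_dir/dhchap_key"        # key3 from this run, middle truncated
echo 'DHHC-1:00:MTFlNjhj...:' > "$host_dir/dhchap_ctrl_key"   # ckey3 from this run, middle truncated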
00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.807 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.808 nvme0n1 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.808 
17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.808 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.068 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.068 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.068 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:32.068 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.068 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.068 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.069 17:11:10 
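Key index 4 is the unidirectional case: ckeys[4] is empty, so the [[ -z '' ]] at host/auth.sh@51 skips the controller-key write on the target, and the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 drops --dhchap-ctrlr-key from the attach, which is why the key4 attach lines above carry no ctrlr key. A small sketch of that bash idiom, with hypothetical array contents:

# ':+' expands to the flag pair only when a controller key exists for this index
declare -a ckeys=([0]='DHHC-1:03:placeholder:' [4]='')      # hypothetical contents
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # -> empty array for keyid 4
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

With key indices 0-3 the same expansion yields the two extra words --dhchap-ctrlr-key ckeyN, which is exactly the difference between the key4 attach and the others in this trace.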
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.069 nvme0n1 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.069 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:32.329 17:11:10 
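At this point the trace has moved on to the third DH group; the structure driving all of these nearly identical blocks is a plain nest over digests, DH groups and key indices (the for loops at host/auth.sh@100-102 above), with one target reprogramming plus one connect per combination. A sketch of that skeleton, reusing the two auth.sh helpers and listing only the values that actually appear in this excerpt; the full arrays in the test may cover more digests and groups than shown here:

# iteration skeleton behind the repeated blocks in this log
digests=(sha256)                            # only sha256 is exercised in this part of the log
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)    # groups seen so far in this excerpt
declare -a keys=(k0 k1 k2 k3 k4)            # placeholders for the five DHHC-1 host keys
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done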
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.329 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.330 17:11:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.590 nvme0n1 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.590 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.591 17:11:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.591 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.591 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.852 nvme0n1 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.852 17:11:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.852 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.112 nvme0n1 00:26:33.112 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.112 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.112 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.112 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.112 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.113 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.113 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.113 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.113 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
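The secrets traced throughout follow the NVMe-oF DH-HMAC-CHAP key format, DHHC-1:<hh>:<base64 blob>:, where <hh> names the hash used to transform the secret into the retained key (00 = used as-is, 01/02/03 = SHA-256/384/512). By the nvme-cli gen-dhchap-key convention the blob is the raw secret followed by a 4-byte CRC32; that convention is an assumption here rather than something this log states. A small inspection sketch under that assumption:

# rough length check of a DHHC-1 secret (assumes blob = secret || 4-byte CRC32)
key='DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi:'   # key0 from this run
blob=${key#DHHC-1:??:}   # strip the 'DHHC-1:<hh>:' prefix
blob=${blob%:}           # and the trailing colon
echo "secret length: $(( $(printf '%s' "$blob" | base64 -d | wc -c) - 4 )) bytes"

For key0 this prints a 32-byte secret.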
00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.373 17:11:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.634 nvme0n1 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.634 17:11:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.634 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.895 nvme0n1 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:33.895 17:11:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.895 17:11:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.465 nvme0n1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.465 
17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.465 17:11:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.465 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.036 nvme0n1 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.036 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.037 17:11:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.608 nvme0n1 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.608 
17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.608 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.179 nvme0n1 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.179 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.180 17:11:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.750 nvme0n1 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.750 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.751 17:11:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.322 nvme0n1 00:26:37.323 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.323 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.323 17:11:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.323 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.323 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.323 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.584 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.155 nvme0n1 00:26:38.155 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.155 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.155 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.155 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.155 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.155 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.416 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.416 17:11:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.416 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.416 17:11:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.416 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.989 nvme0n1 00:26:38.989 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.989 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.989 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.989 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.989 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.989 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.253 
17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:39.253 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
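The frame markers host/auth.sh@100 through host/auth.sh@104 show the driver loop that generates all of these rounds: digests outermost (sha256 above, sha384 starting below), DH groups in the middle (ffdhe4096, ffdhe6144 and ffdhe8192 in this stretch, ffdhe2048 below), and the five key IDs innermost. A hedged reconstruction of that loop, with the array contents only partially visible in this part of the trace:

    for digest in "${digests[@]}"; do                # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
            for keyid in "${!keys[@]}"; do           # host/auth.sh@102
                # Program the matching key on the target side first...
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
                # ...then attach, verify and detach on the host side.
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
            done
        done
    done

On the target side, nvmet_auth_set_key's xtrace (host/auth.sh@48-51) shows it echoing 'hmac(<digest>)', the DH group, the key, and the controller key when one exists; the redirection targets are not captured in the trace, so they are not reconstructed here.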
00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.254 17:11:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 nvme0n1 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.895 
17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.895 17:11:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.841 nvme0n1 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.841 nvme0n1 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.841 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.103 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.103 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.104 nvme0n1 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.104 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.367 17:11:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.367 nvme0n1 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.367 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.630 nvme0n1 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.630 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.631 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.631 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.631 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.631 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.631 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.631 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.894 nvme0n1 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
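The echo 'hmac(sha384)', echo ffdhe3072 and echo DHHC-1:... lines in the trace come from the test's nvmet_auth_set_key helper, which provisions the matching credentials on the kernel nvmet target before the host attempts to connect. The trace only shows the echoed values, not where they are written; the short sketch below is a hedged reconstruction that assumes the standard nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) — treat those paths as an assumption, not something shown in this log. The key strings themselves are the keyid-0 values from the trace.

# Hedged sketch of what nvmet_auth_set_key is presumed to do on the target side.
# Assumption: the kernel nvmet configfs per-host DH-CHAP attributes listed below.
hostnqn=nqn.2024-02.io.spdk:host0
hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn

key='DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi:'
ckey='DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=:'

echo 'hmac(sha384)' > "$hostdir/dhchap_hash"      # digest under test in this pass
echo ffdhe3072      > "$hostdir/dhchap_dhgroup"   # DH group under test in this pass
echo "$key"         > "$hostdir/dhchap_key"       # host key
echo "$ckey"        > "$hostdir/dhchap_ctrl_key"  # controller key, when one is defined
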
00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.894 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.156 nvme0n1 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.156 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.157 17:11:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.419 nvme0n1 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.419 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.420 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.420 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.420 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.420 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.420 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.420 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.682 nvme0n1 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.682 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.945 nvme0n1 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.945 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.207 nvme0n1 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.208 17:11:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.208 17:11:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.470 nvme0n1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.470 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.732 nvme0n1 00:26:43.732 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.732 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.995 17:11:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.995 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.256 nvme0n1 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.256 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:44.257 17:11:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.257 17:11:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.519 nvme0n1 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:44.519 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.778 nvme0n1 00:26:44.778 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.778 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.778 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.778 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.778 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.038 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.039 17:11:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.607 nvme0n1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.607 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.866 nvme0n1 00:26:45.866 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.866 17:11:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.866 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.866 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.866 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.125 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.126 17:11:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.696 nvme0n1 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.696 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.957 nvme0n1 00:26:46.957 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.957 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.957 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.957 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.957 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.957 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
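Each pass of the loop traced above performs the same initiator-side round: bdev_nvme_set_options narrows the allowed DH-HMAC-CHAP digest and FFDHE group, get_main_ns_ip resolves NVMF_INITIATOR_IP to 10.0.0.1, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with --dhchap-key (and, when a controller key is configured, --dhchap-ctrlr-key), the new controller is checked via bdev_nvme_get_controllers | jq -r '.[].name', and bdev_nvme_detach_controller tears it down. A condensed sketch of one such round is shown below; connect_round is an illustrative name (the script's own helper is connect_authenticate and does more bookkeeping), rpc_cmd is the suite's RPC wrapper seen throughout this trace, and the key names key$keyid/ckey$keyid are assumed to have been registered earlier in the run, outside this excerpt.

# Condensed, hedged sketch of one connect/verify/detach round, under the
# assumptions stated above. All RPC names, flags, addresses and NQNs are the
# ones appearing in the trace; only the function name is illustrative.
connect_round() {
    local digest=$1 dhgroup=$2 keyid=$3
    local -a ckey=()
    if [[ -n ${4:-} ]]; then
        # Bidirectional auth: pass the controller key name when one exists.
        ckey=(--dhchap-ctrlr-key "$4")
    fi
    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Connect with DH-HMAC-CHAP; address, port and NQNs match the trace above.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # Authentication succeeded only if the controller actually shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Detach before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
}
# e.g. connect_round sha384 ffdhe6144 1 ckey1   (bidirectional)
#      connect_round sha384 ffdhe6144 4         (no controller key, as traced here)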
00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.217 17:11:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.477 nvme0n1 00:26:47.477 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.477 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.477 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.477 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.477 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:47.738 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
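On the target side, every iteration first re-keys the host entry: nvmet_auth_set_key echoes 'hmac(<digest>)', the FFDHE group, the host secret in DHHC-1:<nn>:<base64>: form and, when one is configured, the bidirectional controller secret (keyid 4 carries none, hence the [[ -z '' ]] branch in the trace). In the DHHC-1 prefix the two-digit field indicates how the secret was transformed (00 plain, 01/02/03 for SHA-256/384/512). The trace only shows the values being echoed, not their destinations; the sketch below assumes they land in the usual nvmet configfs attributes under the host NQN, which is an assumption not confirmed by this excerpt. The surrounding driver is the triple loop visible at host/auth.sh@100-@104: digests x dhgroups x key indices.

# Hedged sketch of the target-side half of each iteration. The configfs path
# and attribute names below are assumed (they are not visible in this trace),
# and nvmet_set_dhchap is an illustrative stand-in for the script's
# nvmet_auth_set_key.
nvmet_set_dhchap() {
    local digest=$1 dhgroup=$2 key=$3 ckey=${4:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha384)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe8192
    echo "$key"          > "$host/dhchap_key"      # DHHC-1:xx:<base64>: host secret
    if [[ -n $ckey ]]; then                        # controller key is optional
        echo "$ckey" > "$host/dhchap_ctrl_key"
    fi
}
# Driver, matching the loop markers in the trace (host/auth.sh@100-@104):
# for digest in "${digests[@]}"; do
#   for dhgroup in "${dhgroups[@]}"; do
#     for keyid in "${!keys[@]}"; do
#       nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
#       connect_authenticate "$digest" "$dhgroup" "$keyid"
#     done
#   done
# done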
00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.739 17:11:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.310 nvme0n1 00:26:48.310 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.310 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.310 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.310 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.310 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.310 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.572 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.144 nvme0n1 00:26:49.144 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.144 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.144 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.144 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.144 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.144 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.406 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.406 17:11:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.406 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.406 17:11:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.406 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.977 nvme0n1 00:26:49.977 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.977 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.977 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.977 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.977 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.977 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.238 17:11:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.825 nvme0n1 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.825 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.826 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.086 17:11:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.086 17:11:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.662 nvme0n1 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.662 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.923 nvme0n1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.923 17:11:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.923 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.185 nvme0n1 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.185 17:11:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.446 nvme0n1 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.446 17:11:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.446 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.447 17:11:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.447 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.708 nvme0n1 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.708 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.969 nvme0n1 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.969 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.230 nvme0n1 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.230 
17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.230 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.231 17:11:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.231 17:11:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.492 nvme0n1 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
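The cycle traced above and below repeats for every digest/dhgroup/key-id combination: host/auth.sh@100-103 loop over the digests, dhgroups and key indexes, program the key on the target side (nvmet_auth_set_key) and then run connect_authenticate, which restricts the initiator to that digest/dhgroup, attaches with the matching --dhchap-key/--dhchap-ctrlr-key, checks the controller name and detaches again. A condensed sketch of that loop, reconstructed from the expanded commands in this trace (array names and helper internals not visible here are assumptions, not the verbatim host/auth.sh):

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # target side: install key/ckey for this digest+dhgroup combination
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

                # host side: only allow this digest/dhgroup, then attach with the matching keys
                rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # optional controller key, as at host/auth.sh@58
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key$keyid" "${ckey[@]}"

                # verify the authenticated controller came up, then tear it down for the next combination
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done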
00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.492 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.754 nvme0n1 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.754 17:11:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
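The nvmf/common.sh@741-755 expansions around this point are the body of get_main_ns_ip: it maps the transport under test to the name of the environment variable holding the address to dial, then dereferences it (tcp -> NVMF_INITIATOR_IP -> 10.0.0.1 in this run). A minimal sketch of that helper inferred from the substituted values in the trace; the guard conditions and the TEST_TRANSPORT variable name are assumptions, and the upstream nvmf/common.sh may differ in detail:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # map each transport to the env var that carries the address used by the host
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is "tcp" here, which is why the trace shows [[ -z tcp ]]
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}
        # indirect expansion: ${!ip} resolves NVMF_INITIATOR_IP to 10.0.0.1 in this trace
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }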
00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.754 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.016 nvme0n1 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.016 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.017 
17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.017 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.288 nvme0n1 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.288 17:11:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.549 nvme0n1 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.549 17:11:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.549 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.550 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.550 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.811 nvme0n1 00:26:54.811 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.811 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.811 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.811 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.811 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.811 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
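The nvmet_auth_set_key steps traced above (host/auth.sh@42 through @51) provision the kernel nvmet target with the per-host DH-HMAC-CHAP parameters before each connect attempt: the echoed 'hmac(sha512)', the dhgroup name, and the DHHC-1 secrets are presumably redirected into the target's configfs attributes. A minimal bash sketch of that provisioning follows, assuming the standard nvmet configfs layout (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the hostnqn used throughout this run; the paths and the standalone form of the helper are assumptions, not the test's verbatim code.

# Hedged sketch of the target-side key setup; configfs paths are assumed,
# the secrets are the keyid=2 values that appear in this trace.
hostnqn="nqn.2024-02.io.spdk:host0"
host_dir="/sys/kernel/config/nvmet/hosts/${hostnqn}"   # assumed nvmet configfs location

digest='hmac(sha512)'                                  # echoed at host/auth.sh@48
dhgroup='ffdhe4096'                                    # echoed at host/auth.sh@49
key='DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0:'
ckey='DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7:'

echo "$digest"  > "${host_dir}/dhchap_hash"
echo "$dhgroup" > "${host_dir}/dhchap_dhgroup"
echo "$key"     > "${host_dir}/dhchap_key"
# A controller (bidirectional) secret is only written when one exists,
# mirroring the [[ -z ... ]] guard at host/auth.sh@51.
[ -n "$ckey" ] && echo "$ckey" > "${host_dir}/dhchap_ctrl_key"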
00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.072 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.073 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.334 nvme0n1 00:26:55.334 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.334 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:55.334 17:11:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.334 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.334 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.334 17:11:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.334 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.595 nvme0n1 00:26:55.595 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.595 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.595 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.595 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.595 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.595 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.596 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.858 nvme0n1 00:26:55.858 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.858 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.858 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.858 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.858 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.858 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
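The nvmf/common.sh lines that repeat before every attach (common.sh@741 through @755) are the get_main_ns_ip helper choosing which address the initiator should dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which in this run dereferences to 10.0.0.1. The following is a hedged reconstruction from the xtrace only; the transport variable name and any error handling outside the trace are assumptions.

# Reconstructed from the xtrace; not guaranteed to match nvmf/common.sh line for line.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if no transport is selected or it has no candidate variable
    # (the trace evaluates [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]).
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}     # holds a variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1              # the trace checks [[ -z 10.0.0.1 ]]
    echo "${!ip}"                            # prints 10.0.0.1 in this run
}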
00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.119 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.120 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.120 17:11:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.120 17:11:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.120 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.120 17:11:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.381 nvme0n1 00:26:56.381 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.381 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.381 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.381 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.381 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
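Each connect_authenticate pass in this trace boils down to four host-side SPDK RPC calls, framed by the ns-IP lookup sketched above. The snippet below reproduces one ffdhe6144 iteration as it appears in the log, assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the running application's RPC socket; key1 and ckey1 are key names registered earlier in the test, not the DHHC-1 secrets themselves.

# One host-side iteration (host/auth.sh@104, @55-@65), lifted from the trace.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

ip=$(get_main_ns_ip)    # 10.0.0.1 in this run
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller authenticated and came up under the expected name,
# then detach before the next digest/dhgroup/keyid combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0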
00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.642 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.903 nvme0n1 00:26:56.903 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.164 17:11:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.424 nvme0n1 00:26:57.424 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.424 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.424 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.424 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.425 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.686 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.947 nvme0n1 00:26:57.947 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.210 17:11:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.782 nvme0n1 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.782 17:11:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYxNTY5YWUxNmYyM2E2NjYyNTZiOGRiYjIwZGI2ZTY57ooi: 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: ]] 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTg3ZThjZjZmZmYzZTBmYjk1Nzk4NDg5NGI2ZDZhOTdlNWViNmNjMzYzNjAwZWRiNWQ2NmJkNDAzOTI1NWUxNvyV3UM=: 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.782 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.783 17:11:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.354 nvme0n1 00:26:59.354 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.354 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.354 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.354 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.354 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.354 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.615 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.186 nvme0n1 00:27:00.186 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.186 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.186 17:11:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.186 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.186 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.186 17:11:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.447 17:11:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmMwZjBkZTgzODI4YjU1NWFiZWU5Y2ZiMGExMzhmNzTTg1n0: 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: ]] 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdlNjdjYmY0YTJhYmU5YmM5ZmUxMWI2NWU0YzFmMzmek8q7: 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.447 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.448 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.019 nvme0n1 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.019 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjYyMGI5ZmJiMDAyOTZiY2VlOTdjYTE0YjA2ZjNhNTY5Yjg3MDQxMDVhNmU2ZWY56tFsZg==: 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: ]] 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTFlNjhjODFlM2RmNDIzZDZmNWZjZmFiZGIxNmExMGb6b1xT: 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:01.279 17:11:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.279 17:11:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.280 17:11:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.280 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.280 17:11:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.850 nvme0n1 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.850 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjRiZmE0NjMzMzFjMjA5ODU5YTYxZTQ3OTEwMGVmYjliNzg2NDRiZGM2MTdiZmMyNjVjYjNiODc1YjIwYmI5N0tRD30=: 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:02.111 17:11:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.682 nvme0n1 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.682 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.942 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWJhYmM4OTBjMmQyNzI4MTZmNzRlZTczY2E5MjJmMWYxNTdkYjBkOWI2N2I5YzAzMyRf/g==: 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU3MDUwODMzZTliMWE3NzJlNGQzNmJmZGIzZWFmMmI5NzJlMzZiODNhYjNkNjA5eHIpCQ==: 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.943 
17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.943 request: 00:27:02.943 { 00:27:02.943 "name": "nvme0", 00:27:02.943 "trtype": "tcp", 00:27:02.943 "traddr": "10.0.0.1", 00:27:02.943 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:02.943 "adrfam": "ipv4", 00:27:02.943 "trsvcid": "4420", 00:27:02.943 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:02.943 "method": "bdev_nvme_attach_controller", 00:27:02.943 "req_id": 1 00:27:02.943 } 00:27:02.943 Got JSON-RPC error response 00:27:02.943 response: 00:27:02.943 { 00:27:02.943 "code": -32602, 00:27:02.943 "message": "Invalid parameters" 00:27:02.943 } 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:02.943 
17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.943 request: 00:27:02.943 { 00:27:02.943 "name": "nvme0", 00:27:02.943 "trtype": "tcp", 00:27:02.943 "traddr": "10.0.0.1", 00:27:02.943 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:02.943 "adrfam": "ipv4", 00:27:02.943 "trsvcid": "4420", 00:27:02.943 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:02.943 "dhchap_key": "key2", 00:27:02.943 "method": "bdev_nvme_attach_controller", 00:27:02.943 "req_id": 1 00:27:02.943 } 00:27:02.943 Got JSON-RPC error response 00:27:02.943 response: 00:27:02.943 { 00:27:02.943 "code": -32602, 00:27:02.943 "message": "Invalid parameters" 00:27:02.943 } 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.943 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.944 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.241 request: 00:27:03.241 { 00:27:03.241 "name": "nvme0", 00:27:03.241 "trtype": "tcp", 00:27:03.241 "traddr": "10.0.0.1", 00:27:03.241 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:03.241 "adrfam": "ipv4", 00:27:03.241 "trsvcid": "4420", 00:27:03.241 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:03.241 "dhchap_key": "key1", 00:27:03.241 "dhchap_ctrlr_key": "ckey2", 00:27:03.241 "method": "bdev_nvme_attach_controller", 00:27:03.241 
"req_id": 1 00:27:03.241 } 00:27:03.241 Got JSON-RPC error response 00:27:03.241 response: 00:27:03.241 { 00:27:03.241 "code": -32602, 00:27:03.241 "message": "Invalid parameters" 00:27:03.241 } 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.241 rmmod nvme_tcp 00:27:03.241 rmmod nvme_fabrics 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:03.241 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1611084 ']' 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1611084 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1611084 ']' 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 1611084 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1611084 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1611084' 00:27:03.242 killing process with pid 1611084 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1611084 00:27:03.242 17:11:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1611084 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.242 
17:11:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.242 17:11:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:05.805 17:11:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:08.348 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:08.348 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:08.609 17:11:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Vwk /tmp/spdk.key-null.IsX /tmp/spdk.key-sha256.5KD /tmp/spdk.key-sha384.syS /tmp/spdk.key-sha512.XqU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:08.609 17:11:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:11.909 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:80:01.4 (8086 0b00): Already 
using the vfio-pci driver 00:27:11.909 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:11.909 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:11.909 00:27:11.909 real 0m56.280s 00:27:11.909 user 0m49.831s 00:27:11.909 sys 0m13.875s 00:27:11.909 17:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:11.909 17:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.909 ************************************ 00:27:11.909 END TEST nvmf_auth_host 00:27:11.909 ************************************ 00:27:12.171 17:11:50 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:27:12.171 17:11:50 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:12.171 17:11:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:12.171 17:11:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:12.171 17:11:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:12.171 ************************************ 00:27:12.172 START TEST nvmf_digest 00:27:12.172 ************************************ 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:12.172 * Looking for test storage... 
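Before moving on to the digest tests, the host-side RPC sequence that the nvmf_auth_host run above repeats for each key id can be condensed into the sketch below. The commands are copied from the trace; "key1"/"ckey1" stand for the DH-HMAC-CHAP key entries the test script registers earlier (outside this excerpt), and the listener at 10.0.0.1:4420 is the kernel nvmet subsystem nqn.2024-02.io.spdk:cnode0 that auth.sh configures.

# limit the initiator to one digest/dhgroup combination, then attach with a key pair
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 when auth succeeds
scripts/rpc.py bdev_nvme_detach_controller nvme0
# Negative cases (host/auth.sh@112-@123 in the trace): attaching without a key, or with a
# key that does not match the target's configuration, must fail with JSON-RPC error -32602.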
00:27:12.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.172 17:11:50 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.172 17:11:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:20.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:20.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:20.324 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:20.324 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.324 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.722 ms 00:27:20.325 00:27:20.325 --- 10.0.0.2 ping statistics --- 00:27:20.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.325 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:27:20.325 00:27:20.325 --- 10.0.0.1 ping statistics --- 00:27:20.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.325 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.325 17:11:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:20.325 ************************************ 00:27:20.325 START TEST nvmf_digest_clean 00:27:20.325 ************************************ 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1627120 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1627120 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1627120 ']' 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.325 
17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.325 [2024-05-15 17:11:58.071844] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:20.325 [2024-05-15 17:11:58.071911] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.325 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.325 [2024-05-15 17:11:58.142890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.325 [2024-05-15 17:11:58.216904] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.325 [2024-05-15 17:11:58.216939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.325 [2024-05-15 17:11:58.216946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.325 [2024-05-15 17:11:58.216953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.325 [2024-05-15 17:11:58.216958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:20.325 [2024-05-15 17:11:58.216981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.325 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.325 null0 00:27:20.326 [2024-05-15 17:11:58.947897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.326 [2024-05-15 17:11:58.971870] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:20.326 [2024-05-15 17:11:58.972094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1627462 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1627462 /var/tmp/bperf.sock 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1627462 ']' 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:20.326 17:11:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.326 [2024-05-15 17:11:59.026002] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:20.326 [2024-05-15 17:11:59.026049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627462 ] 00:27:20.326 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.326 [2024-05-15 17:11:59.101460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.589 [2024-05-15 17:11:59.165708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.162 17:11:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:21.162 17:11:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:21.162 17:11:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:21.163 17:11:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:21.163 17:11:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:21.423 17:12:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.423 17:12:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.684 nvme0n1 00:27:21.684 17:12:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:21.684 17:12:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:21.684 Running I/O for 2 seconds... 
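Condensed, each run_bperf iteration traced here boils down to four host-side steps (shown for the 4096-byte randread, queue-depth-128 case; paths are relative to the SPDK checkout, and since scan_dsa=false no DSA accel config is passed to bdevperf):

  # 1. start bdevperf paused, with its own RPC socket
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. finish framework init once the socket is up
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. attach the remote controller with the TCP data digest enabled (--ddgst)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. kick off the timed workload against the resulting nvme0n1 bdev
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests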
00:27:23.596 00:27:23.596 Latency(us) 00:27:23.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.596 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:23.596 nvme0n1 : 2.04 19324.46 75.49 0.00 0.00 6482.47 2880.85 45219.84 00:27:23.596 =================================================================================================================== 00:27:23.597 Total : 19324.46 75.49 0.00 0.00 6482.47 2880.85 45219.84 00:27:23.597 0 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:23.858 | select(.opcode=="crc32c") 00:27:23.858 | "\(.module_name) \(.executed)"' 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1627462 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1627462 ']' 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1627462 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1627462 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1627462' 00:27:23.858 killing process with pid 1627462 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1627462 00:27:23.858 Received shutdown signal, test time was about 2.000000 seconds 00:27:23.858 00:27:23.858 Latency(us) 00:27:23.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.858 =================================================================================================================== 00:27:23.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:23.858 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1627462 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:24.127 17:12:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1628247 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1628247 /var/tmp/bperf.sock 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1628247 ']' 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:24.127 17:12:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:24.127 [2024-05-15 17:12:02.824035] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:24.127 [2024-05-15 17:12:02.824092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628247 ] 00:27:24.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.127 Zero copy mechanism will not be used. 
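After each run, get_accel_stats decides whether the digests were really computed where expected: it pulls accel_get_stats from the bperf socket, filters the crc32c entry with jq, and the test then asserts that the executed count is non-zero and that the reporting module matches the expected one (software here, because scan_dsa=false). The raw JSON is not shown in this trace, but the filter implies each element of .operations[] carries opcode, module_name and executed; a hypothetical invocation and output would be:

  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # example (hypothetical) output, consumed by `read -r acc_module acc_executed`:
  # software 19324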
00:27:24.127 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.127 [2024-05-15 17:12:02.899287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.127 [2024-05-15 17:12:02.953327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.066 17:12:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.325 nvme0n1 00:27:25.325 17:12:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:25.325 17:12:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:25.585 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:25.585 Zero copy mechanism will not be used. 00:27:25.585 Running I/O for 2 seconds... 
00:27:27.495 00:27:27.495 Latency(us) 00:27:27.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.495 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:27.495 nvme0n1 : 2.00 3177.67 397.21 0.00 0.00 5032.50 1228.80 14745.60 00:27:27.495 =================================================================================================================== 00:27:27.495 Total : 3177.67 397.21 0.00 0.00 5032.50 1228.80 14745.60 00:27:27.495 0 00:27:27.495 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:27.495 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:27.495 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:27.495 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:27.495 | select(.opcode=="crc32c") 00:27:27.495 | "\(.module_name) \(.executed)"' 00:27:27.495 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:27.754 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1628247 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1628247 ']' 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1628247 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1628247 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1628247' 00:27:27.755 killing process with pid 1628247 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1628247 00:27:27.755 Received shutdown signal, test time was about 2.000000 seconds 00:27:27.755 00:27:27.755 Latency(us) 00:27:27.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.755 =================================================================================================================== 00:27:27.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.755 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1628247 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:28.036 17:12:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1628920 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1628920 /var/tmp/bperf.sock 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1628920 ']' 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:28.036 17:12:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:28.036 [2024-05-15 17:12:06.649121] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:27:28.036 [2024-05-15 17:12:06.649175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628920 ] 00:27:28.036 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.036 [2024-05-15 17:12:06.725155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.036 [2024-05-15 17:12:06.778375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.610 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:28.610 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:28.610 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:28.610 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:28.610 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:28.872 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.872 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.134 nvme0n1 00:27:29.134 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:29.134 17:12:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.395 Running I/O for 2 seconds... 
00:27:31.311 00:27:31.311 Latency(us) 00:27:31.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.311 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:31.311 nvme0n1 : 2.01 22012.54 85.99 0.00 0.00 5806.76 2293.76 11468.80 00:27:31.311 =================================================================================================================== 00:27:31.311 Total : 22012.54 85.99 0.00 0.00 5806.76 2293.76 11468.80 00:27:31.311 0 00:27:31.311 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:31.311 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:31.311 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:31.311 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:31.311 | select(.opcode=="crc32c") 00:27:31.311 | "\(.module_name) \(.executed)"' 00:27:31.311 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1628920 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1628920 ']' 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1628920 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1628920 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:31.572 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1628920' 00:27:31.573 killing process with pid 1628920 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1628920 00:27:31.573 Received shutdown signal, test time was about 2.000000 seconds 00:27:31.573 00:27:31.573 Latency(us) 00:27:31.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.573 =================================================================================================================== 00:27:31.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1628920 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:31.573 17:12:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1630049 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1630049 /var/tmp/bperf.sock 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1630049 ']' 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:31.573 17:12:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.573 [2024-05-15 17:12:10.399705] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:31.573 [2024-05-15 17:12:10.399757] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630049 ] 00:27:31.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:31.573 Zero copy mechanism will not be used. 
00:27:31.833 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.834 [2024-05-15 17:12:10.473713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.834 [2024-05-15 17:12:10.526490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.407 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.407 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:32.407 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:32.407 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:32.407 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:32.669 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.669 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.930 nvme0n1 00:27:32.930 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:32.930 17:12:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.930 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:32.930 Zero copy mechanism will not be used. 00:27:32.930 Running I/O for 2 seconds... 
00:27:35.473 00:27:35.473 Latency(us) 00:27:35.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.473 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:35.473 nvme0n1 : 2.00 3597.97 449.75 0.00 0.00 4438.93 1952.43 8519.68 00:27:35.473 =================================================================================================================== 00:27:35.473 Total : 3597.97 449.75 0.00 0.00 4438.93 1952.43 8519.68 00:27:35.473 0 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:35.473 | select(.opcode=="crc32c") 00:27:35.473 | "\(.module_name) \(.executed)"' 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1630049 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1630049 ']' 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1630049 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1630049 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1630049' 00:27:35.473 killing process with pid 1630049 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1630049 00:27:35.473 Received shutdown signal, test time was about 2.000000 seconds 00:27:35.473 00:27:35.473 Latency(us) 00:27:35.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.473 =================================================================================================================== 00:27:35.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:35.473 17:12:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1630049 00:27:35.473 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1627120 00:27:35.473 17:12:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1627120 ']' 00:27:35.473 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1627120 00:27:35.473 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1627120 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1627120' 00:27:35.474 killing process with pid 1627120 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1627120 00:27:35.474 [2024-05-15 17:12:14.107913] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1627120 00:27:35.474 00:27:35.474 real 0m16.231s 00:27:35.474 user 0m31.963s 00:27:35.474 sys 0m3.292s 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:35.474 ************************************ 00:27:35.474 END TEST nvmf_digest_clean 00:27:35.474 ************************************ 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:35.474 ************************************ 00:27:35.474 START TEST nvmf_digest_error 00:27:35.474 ************************************ 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1630853 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1630853 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
1630853 ']' 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:35.474 17:12:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.734 [2024-05-15 17:12:14.345584] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:35.735 [2024-05-15 17:12:14.345644] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.735 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.735 [2024-05-15 17:12:14.412787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.735 [2024-05-15 17:12:14.482879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.735 [2024-05-15 17:12:14.482917] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.735 [2024-05-15 17:12:14.482925] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.735 [2024-05-15 17:12:14.482931] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.735 [2024-05-15 17:12:14.482936] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
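For nvmf_digest_error the target is started with --wait-for-rpc so that, a few lines below, crc32c can be reassigned to the accel "error" module before the framework finishes initializing (the accel_rpc notice confirms the reassignment). A sketch of that target-side sequence, with the framework_start_init step assumed rather than visible in this excerpt:

  # start the target paused inside the test netns, as in the trace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # route all crc32c operations through the error-injecting accel module
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  # resume initialization (assumed step, not echoed in this excerpt)
  ./scripts/rpc.py framework_start_init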
00:27:35.735 [2024-05-15 17:12:14.482954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.306 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:36.306 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:36.306 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.306 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.306 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.567 [2024-05-15 17:12:15.160908] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.567 null0 00:27:36.567 [2024-05-15 17:12:15.241787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.567 [2024-05-15 17:12:15.265779] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:36.567 [2024-05-15 17:12:15.266000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1631094 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1631094 /var/tmp/bperf.sock 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1631094 ']' 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:36.567 
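The host side of run_bperf_err differs from the clean case in two ways that show up in the next stretch of the trace: bdevperf is started without --wait-for-rpc, and bdev_nvme_set_options enables per-NVMe error statistics and sets the bdev retry count to -1 before the controller is attached. Corruption is only switched on at the target right before perform_tests, which is what produces the stream of "data digest error" completions that follows. Condensed:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # target side: make sure injection starts off
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256 # target side: start corrupting crc32c results (arguments as issued in the trace)
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests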
17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:36.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:36.567 17:12:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.567 [2024-05-15 17:12:15.317801] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:36.567 [2024-05-15 17:12:15.317847] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631094 ] 00:27:36.568 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.568 [2024-05-15 17:12:15.391602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.828 [2024-05-15 17:12:15.445090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.398 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.398 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:37.398 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:37.398 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:37.658 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:37.658 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.659 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.659 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.659 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.659 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.919 nvme0n1 00:27:37.919 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:37.919 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.919 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:37.919 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.919 17:12:16 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:37.919 17:12:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.919 Running I/O for 2 seconds... 00:27:37.919 [2024-05-15 17:12:16.726662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:37.919 [2024-05-15 17:12:16.726692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.919 [2024-05-15 17:12:16.726701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.919 [2024-05-15 17:12:16.737420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:37.919 [2024-05-15 17:12:16.737439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.919 [2024-05-15 17:12:16.737446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.919 [2024-05-15 17:12:16.749268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:37.919 [2024-05-15 17:12:16.749286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.919 [2024-05-15 17:12:16.749294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.762129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.762147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.762154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.773942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.773964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.773970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.786924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.786940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.786947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.800363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.800380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.800386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.811308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.811324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.811331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.825541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.825562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.825568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.836829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.836845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.836851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.848798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.848815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.848821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.860194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.860210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.860217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.873061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.873078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.873084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.885399] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.885416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.897807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.897824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.897830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.909365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.909381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.909388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.921587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.921604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.921610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.934934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.934950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.934957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.946469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.946485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.946491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:38.183 [2024-05-15 17:12:16.959131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770) 00:27:38.183 [2024-05-15 17:12:16.959147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:38.183 [2024-05-15 17:12:16.959154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:27:38.184 [2024-05-15 17:12:16.970195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770)
00:27:38.184 [2024-05-15 17:12:16.970212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:38.184 [2024-05-15 17:12:16.970218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1450 reporting a data digest error on tqpair=(0x15ce770), nvme_qpair.c:243 printing the affected READ command with varying cid/lba, nvme_qpair.c:474 printing a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each data digest error detected during the 2-second randread run, from 17:12:16.984 through 17:12:18.700 (console timestamps 00:27:38.184 to 00:27:40.123); only the first occurrence is shown above ...]
00:27:39.862 [2024-05-15 17:12:18.651043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:39.862 [2024-05-15 17:12:18.664076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770)
00:27:39.862 [2024-05-15 17:12:18.664092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.862 [2024-05-15 17:12:18.664098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:39.863 [2024-05-15 17:12:18.676976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770)
00:27:39.863 [2024-05-15 17:12:18.676995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.863 [2024-05-15 17:12:18.677002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:39.863 [2024-05-15 17:12:18.687604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770)
00:27:39.863 [2024-05-15 17:12:18.687620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.863 [2024-05-15 17:12:18.687626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:40.123 [2024-05-15 17:12:18.700195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ce770)
00:27:40.123 [2024-05-15 17:12:18.700212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:40.123 [2024-05-15 17:12:18.700218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:40.123
00:27:40.123                                                                             Latency(us)
00:27:40.123 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:40.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:40.123 nvme0n1                     :       2.00   20711.36      80.90       0.00     0.00    6174.28    2484.91   22063.79
00:27:40.123 ===================================================================================================================
00:27:40.123 Total                       :   20711.36      80.90       0.00       0.00    6174.28    2484.91   22063.79
00:27:40.123 0
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:40.123 | .driver_specific
00:27:40.123 | .nvme_error
00:27:40.123 | .status_code
00:27:40.123 | .command_transient_transport_error'
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1631094
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1631094 ']'
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1631094
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:40.123 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1631094
00:27:40.383 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:40.383 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:40.383 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1631094'
00:27:40.383 killing process with pid 1631094
00:27:40.383 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1631094
00:27:40.383 Received shutdown signal, test time was about 2.000000 seconds
00:27:40.383
00:27:40.383                                                                             Latency(us)
00:27:40.383 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:40.383 ===================================================================================================================
00:27:40.383 Total                       :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:27:40.383 17:12:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1631094
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1631775
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1631775 /var/tmp/bperf.sock
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1631775 ']'
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:40.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
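The get_transient_errcount check traced above boils down to one query against the bdevperf RPC socket: bdev_get_iostat reports per-status-code NVMe error counters (the --nvme-error-stat option passed to bdev_nvme_set_options in the traced setup turns these counters on), and the test only reads the COMMAND TRANSIENT TRANSPORT ERROR bucket. A minimal sketch of that check, reconstructed from the xtrace rather than from digest.sh itself, with the paths and bdev name taken from the trace:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Ask bdevperf for nvme0n1's iostat and pull out the transient-transport-error counter.
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest-error test passes only if at least one such error was counted; this run saw 162.
    (( errcount > 0 ))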
00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.383 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.383 [2024-05-15 17:12:19.119879] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:40.383 [2024-05-15 17:12:19.119930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631775 ] 00:27:40.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:40.383 Zero copy mechanism will not be used. 00:27:40.383 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.383 [2024-05-15 17:12:19.193473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.644 [2024-05-15 17:12:19.246583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.215 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:41.215 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:41.215 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.215 17:12:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.215 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:41.215 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.215 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.215 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.215 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.476 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.738 nvme0n1 00:27:41.738 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:41.738 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.738 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.738 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.738 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:41.738 17:12:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:41.738 Zero copy mechanism will not be used. 
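The trace above is the setup for the next run_bperf_err pass (randread, 131072-byte I/O, queue depth 16): bdevperf is started in wait-for-RPC mode on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with the TCP data digest turned on, crc32c error injection is switched to corrupt, and perform_tests drives the 2-second workload. A minimal sketch of that sequence, reconstructed from the traced commands; bperf_rpc expands to rpc.py -s /var/tmp/bperf.sock as shown in the trace, while rpc_cmd is the autotest suite's own RPC helper and is kept here exactly as traced:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf on its own RPC socket; -z makes it wait until perform_tests is issued.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    # Once /var/tmp/bperf.sock is listening (waitforlisten in the trace), configure the initiator side.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable        # crc32c injection disabled first (as traced)
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst: TCP data digest on
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt crc32c results (arguments as traced)
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests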
00:27:41.738 Running I/O for 2 seconds... 00:27:41.738 [2024-05-15 17:12:20.551803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:41.738 [2024-05-15 17:12:20.551833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.738 [2024-05-15 17:12:20.551841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.738 [2024-05-15 17:12:20.561252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:41.738 [2024-05-15 17:12:20.561272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.738 [2024-05-15 17:12:20.561279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.738 [2024-05-15 17:12:20.568369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:41.738 [2024-05-15 17:12:20.568387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.738 [2024-05-15 17:12:20.568394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.575341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.575358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.575365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.582037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.582055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.582061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.588351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.588368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.594379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.594395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.594405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.600437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.600454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.600460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.606138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.606155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.606161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.612102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.612118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.612125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.000 [2024-05-15 17:12:20.617758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.000 [2024-05-15 17:12:20.617775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.000 [2024-05-15 17:12:20.617781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.623413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.623430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.623436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.629121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.629138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.629144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.634688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.634704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 
17:12:20.634710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.640096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.640112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.640118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.645633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.645652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.645659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.651565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.651582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.651588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.657230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.657247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.657253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.663179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.663195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.663202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.669671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.669688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.669694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.678791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.678808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.678814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.688041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.688058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.688065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.697379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.697396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.697402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.707476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.707492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.707499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.717090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.717108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.717114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.728414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.728431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.728437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.738575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.738591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.738597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.746586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.746603] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.746609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.756166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.756183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.756189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.762423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.762441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.762447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.773392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.773410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.773416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.782974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.782991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.782998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.793520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.793538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.793553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.801428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.801447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.801453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.810023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.810040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.001 [2024-05-15 17:12:20.810046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.001 [2024-05-15 17:12:20.819191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.001 [2024-05-15 17:12:20.819209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.002 [2024-05-15 17:12:20.819215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.002 [2024-05-15 17:12:20.829328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.002 [2024-05-15 17:12:20.829346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.002 [2024-05-15 17:12:20.829353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.263 [2024-05-15 17:12:20.837928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.263 [2024-05-15 17:12:20.837946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.263 [2024-05-15 17:12:20.837953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.263 [2024-05-15 17:12:20.848031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.263 [2024-05-15 17:12:20.848049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.848055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.856566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.856584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.856590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.866625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.866642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.866649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.876887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 
00:27:42.264 [2024-05-15 17:12:20.876905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.876910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.886769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.886786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.886793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.897442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.897460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.897466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.904998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.905017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.905022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.914049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.914067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.914073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.925058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.925076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.925082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.933709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.933726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.933732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.943092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.943110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.943116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.951408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.951426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.951435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.960269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.960287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.960293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.970267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.970284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.970290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.979420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.979438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.979444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.988529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.988552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.988558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:20.997317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:20.997336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:20.997342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.007587] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.007605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.007611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.016824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.016842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.016848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.025540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.025562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.025569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.037346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.037366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.037372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.050236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.050254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.050260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.063355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.063373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.063379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.072550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.072567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.072573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:42.264 [2024-05-15 17:12:21.080929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.080946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.080952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.264 [2024-05-15 17:12:21.088729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.264 [2024-05-15 17:12:21.088747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.264 [2024-05-15 17:12:21.088753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.526 [2024-05-15 17:12:21.098708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.526 [2024-05-15 17:12:21.098725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.526 [2024-05-15 17:12:21.098732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.108062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.108080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.108086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.116862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.116881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.116887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.127275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.127293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.127300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.139255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.139273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.139279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.150283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.150301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.150307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.163170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.163187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.163194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.173188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.173206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.173212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.182840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.182859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.182865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.190965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.190983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.190989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.201725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.201743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.201749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.212372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.212390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.212399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.221397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.221414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.221420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.232015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.232034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.232040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.241390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.241407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.241414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.249898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.249916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.249922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.258152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.258170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.258176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.268009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.268027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.268033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.278152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.278170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.278176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.288439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.288457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.288463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.297568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.297589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.297595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.308450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.308468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.308474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.318206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.318224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.318231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.327414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.327432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.327438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.336392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.336411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.336417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.347184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.347202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:42.527 [2024-05-15 17:12:21.347208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.527 [2024-05-15 17:12:21.356752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.527 [2024-05-15 17:12:21.356770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.527 [2024-05-15 17:12:21.356776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.365177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.365196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.365202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.374908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.374926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.374932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.384285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.384303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.384309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.394308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.394326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.394332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.403096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.403113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.403120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.410313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.410331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.410337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.420085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.420102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.420108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.428941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.428959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.428965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.439778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.439796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.439803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.448376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.448394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.448400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.458420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.458438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.458450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.468621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.468639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.468645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.478479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.478497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.478503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.488493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.488510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.488516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.789 [2024-05-15 17:12:21.498170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.789 [2024-05-15 17:12:21.498188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.789 [2024-05-15 17:12:21.498194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.506777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.506795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.506802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.516582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.516600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.516606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.526278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.526296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.526302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.534543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.534566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.534572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.543983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.544000] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.544007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.552952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.552970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.552976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.562450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.562468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.562474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.572970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.572988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.572994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.581818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.581836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.581842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.590910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.590927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.590933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.601294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.601312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.601318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.610181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 
00:27:42.790 [2024-05-15 17:12:21.610198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.610205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.790 [2024-05-15 17:12:21.619761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:42.790 [2024-05-15 17:12:21.619778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.790 [2024-05-15 17:12:21.619787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.052 [2024-05-15 17:12:21.629094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.052 [2024-05-15 17:12:21.629112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-05-15 17:12:21.629119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.052 [2024-05-15 17:12:21.639459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.052 [2024-05-15 17:12:21.639477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-05-15 17:12:21.639484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.052 [2024-05-15 17:12:21.648892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.052 [2024-05-15 17:12:21.648910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.052 [2024-05-15 17:12:21.648916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.658126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.658143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.658150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.667116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.667133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.667139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.676299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.676317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.676323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.685802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.685820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.685825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.700069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.700086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.700092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.713498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.713518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.713524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.723595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.723613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.723619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.732157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.732174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.732180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.741300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.741317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.741323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.752919] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.752937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.752943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.762446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.762463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.762469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.771758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.771775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.771781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.781858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.781876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.781882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.793859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.793876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.793882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.803422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.803439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.803445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.812871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.812888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.812894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:43.053 [2024-05-15 17:12:21.822767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.822785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.822791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.832718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.832736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.832742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.841558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.841575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.841581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.850709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.850727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.850734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.859891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.859908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.859914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.870072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.870090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.870096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.053 [2024-05-15 17:12:21.880073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.053 [2024-05-15 17:12:21.880090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.053 [2024-05-15 17:12:21.880099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.890215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.890233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.890239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.900780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.900797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.900803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.912393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.912410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.912416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.922400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.922417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.922423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.932039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.932056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.932062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.942539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.942560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.942566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.953212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.953229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.953235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.965147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.965164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.965171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.974959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.974978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.974984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.984688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.984705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.984711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:21.994235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:21.994252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:21.994259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.004035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.004053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.004059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.013762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.013779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.013785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.023743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.023761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 
[2024-05-15 17:12:22.023767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.037169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.037186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.037192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.046209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.046227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.046233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.056062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.056079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.056085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.066874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.066892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.066898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.076339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.076356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.076363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.085942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.085959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.316 [2024-05-15 17:12:22.085965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.316 [2024-05-15 17:12:22.096334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.316 [2024-05-15 17:12:22.096351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.317 [2024-05-15 17:12:22.096357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.317 [2024-05-15 17:12:22.105330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.317 [2024-05-15 17:12:22.105347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.317 [2024-05-15 17:12:22.105353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.317 [2024-05-15 17:12:22.114627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.317 [2024-05-15 17:12:22.114644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.317 [2024-05-15 17:12:22.114650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.317 [2024-05-15 17:12:22.123730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.317 [2024-05-15 17:12:22.123747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.317 [2024-05-15 17:12:22.123753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.317 [2024-05-15 17:12:22.133522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.317 [2024-05-15 17:12:22.133540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.317 [2024-05-15 17:12:22.133550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.317 [2024-05-15 17:12:22.143423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.317 [2024-05-15 17:12:22.143441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.317 [2024-05-15 17:12:22.143450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.153943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.153962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.153967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.163662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.163679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.163685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.174042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.174060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.174065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.183954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.183972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.183979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.192268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.192285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.192291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.200567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.200584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.200590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.210943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.210961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.210967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.220928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.220946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.220952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.231435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.231453] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.231459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.242687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.242704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.242710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.248366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.248382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.248388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.255439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.255457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.255463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.264345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.264362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.264369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.275075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.275092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.275098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.287801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.287818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.287824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.300607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.300624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.300630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.313105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.313123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.313132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.324433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.324451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.324457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.333312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.333330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.333336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.344205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.579 [2024-05-15 17:12:22.344222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.579 [2024-05-15 17:12:22.344228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.579 [2024-05-15 17:12:22.354282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.580 [2024-05-15 17:12:22.354300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.580 [2024-05-15 17:12:22.354306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.580 [2024-05-15 17:12:22.364566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.580 [2024-05-15 17:12:22.364583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.580 [2024-05-15 17:12:22.364589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.580 [2024-05-15 17:12:22.376010] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.580 [2024-05-15 17:12:22.376028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.580 [2024-05-15 17:12:22.376034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.580 [2024-05-15 17:12:22.387152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.580 [2024-05-15 17:12:22.387170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.580 [2024-05-15 17:12:22.387177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.580 [2024-05-15 17:12:22.397186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.580 [2024-05-15 17:12:22.397203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.580 [2024-05-15 17:12:22.397209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.580 [2024-05-15 17:12:22.406439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.580 [2024-05-15 17:12:22.406459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.580 [2024-05-15 17:12:22.406465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.416640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.416657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.416663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.426050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.426068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.426073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.436951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.436969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.436975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:43.842 [2024-05-15 17:12:22.446279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.446295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.446302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.456868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.456886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.456892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.465404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.465421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.465427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.473757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.473775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.473781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.482844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.482862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.482868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.493221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.493238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.493245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.500979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.500997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.501002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.510340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.510357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.842 [2024-05-15 17:12:22.510364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.842 [2024-05-15 17:12:22.519534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.842 [2024-05-15 17:12:22.519556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.843 [2024-05-15 17:12:22.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:43.843 [2024-05-15 17:12:22.529090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.843 [2024-05-15 17:12:22.529108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.843 [2024-05-15 17:12:22.529114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:43.843 [2024-05-15 17:12:22.539328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x216f980) 00:27:43.843 [2024-05-15 17:12:22.539345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.843 [2024-05-15 17:12:22.539351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:43.843 00:27:43.843 Latency(us) 00:27:43.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.843 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:43.843 nvme0n1 : 2.00 3271.57 408.95 0.00 0.00 4886.52 1024.00 14417.92 00:27:43.843 =================================================================================================================== 00:27:43.843 Total : 3271.57 408.95 0.00 0.00 4886.52 1024.00 14417.92 00:27:43.843 0 00:27:43.843 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:43.843 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:43.843 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:43.843 | .driver_specific 00:27:43.843 | .nvme_error 00:27:43.843 | .status_code 00:27:43.843 | .command_transient_transport_error' 00:27:43.843 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 )) 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1631775 00:27:44.104 17:12:22 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1631775 ']' 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1631775 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1631775 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1631775' 00:27:44.104 killing process with pid 1631775 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1631775 00:27:44.104 Received shutdown signal, test time was about 2.000000 seconds 00:27:44.104 00:27:44.104 Latency(us) 00:27:44.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.104 =================================================================================================================== 00:27:44.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1631775 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1632455 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1632455 /var/tmp/bperf.sock 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1632455 ']' 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
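Note on the get_transient_errcount trace above: this is how digest.sh decides the randread pass succeeded. It queries bdevperf's per-bdev I/O statistics over the bperf RPC socket, pulls the transient-transport-error counter out with jq, and requires it to be non-zero; in this run 211 injected digest errors were seen and retried. A minimal standalone version of that query, using the same rpc.py path, socket, bdev name and jq filter that appear in this log (the shell variable names are only for illustration):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdev_get_iostat reports per-bdev statistics; the NVMe error counters live under
  # driver_specific once error statistics are enabled on the bdev_nvme module
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the test asserts that at least one transient transport error was recorded
  (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"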
00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:44.104 17:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.365 [2024-05-15 17:12:22.952240] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:27:44.365 [2024-05-15 17:12:22.952298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632455 ] 00:27:44.365 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.365 [2024-05-15 17:12:23.026991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.365 [2024-05-15 17:12:23.079769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.936 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.936 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:44.936 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.936 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.197 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:45.197 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.197 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.197 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.197 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.197 17:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.459 nvme0n1 00:27:45.459 17:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:45.459 17:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.459 17:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.459 17:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.459 17:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:45.459 17:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.459 Running I/O for 2 seconds... 
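Before the randwrite pass above starts issuing I/O, the whole setup is driven over the bperf RPC socket: bdevperf is started idle, NVMe error statistics and unlimited retries are enabled, the TCP controller is attached with data digest (--ddgst) turned on, crc32c corruption is injected through the accel error-injection RPC, and only then is perform_tests sent. A condensed sketch of that sequence, using the binaries, socket and target address shown in the trace; the wait loop stands in for the harness's waitforlisten, and the accel_error_inject_error calls go through the suite's rpc_cmd helper, whose socket is not visible in this excerpt:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf idle (-z): randwrite, 4 KiB I/O, queue depth 128, 2-second run, RPC on bperf.sock
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # crude stand-in for waitforlisten: wait until the RPC socket exists
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

  # count NVMe errors per status code and retry failed I/O indefinitely
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the NVMe-oF/TCP controller with data digest enabled
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # the corruption itself (accel_error_inject_error -o crc32c -t corrupt -i 256) is issued via rpc_cmd
  # as in the trace, so 256 crc32c calculations are corrupted and surface as the digest errors that follow

  # finally kick off the timed run
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With --bdev-retry-count -1 every failed write is retried, so the stream of "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" lines that follows is the expected, non-fatal signature of the injected corruption rather than a test failure.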
00:27:45.459 [2024-05-15 17:12:24.198923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e8088 00:27:45.459 [2024-05-15 17:12:24.200479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.200507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.209708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e0a68 00:27:45.459 [2024-05-15 17:12:24.210887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.210904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.222692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e2c28 00:27:45.459 [2024-05-15 17:12:24.224233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.224249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.232160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190fd208 00:27:45.459 [2024-05-15 17:12:24.233050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.233066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.245006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e9168 00:27:45.459 [2024-05-15 17:12:24.246177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.246193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.257912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190eb328 00:27:45.459 [2024-05-15 17:12:24.259449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.259465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.267359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e3d08 00:27:45.459 [2024-05-15 17:12:24.268206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.268221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.280194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f8e88 00:27:45.459 [2024-05-15 17:12:24.281365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.459 [2024-05-15 17:12:24.281380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:45.459 [2024-05-15 17:12:24.293110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190fb048 00:27:45.721 [2024-05-15 17:12:24.294643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.294659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.302576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190ec408 00:27:45.721 [2024-05-15 17:12:24.303456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.303471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.315407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f5be8 00:27:45.721 [2024-05-15 17:12:24.316578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.316593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.326851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e4140 00:27:45.721 [2024-05-15 17:12:24.328005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.328020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.339543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190fc560 00:27:45.721 [2024-05-15 17:12:24.340886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.340905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.351580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f3e60 00:27:45.721 [2024-05-15 17:12:24.352879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.352894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.362769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f2d80 00:27:45.721 [2024-05-15 17:12:24.364080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.364095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.375638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.721 [2024-05-15 17:12:24.377130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.377145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.387390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190fb048 00:27:45.721 [2024-05-15 17:12:24.388889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.721 [2024-05-15 17:12:24.388904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:45.721 [2024-05-15 17:12:24.399109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f9f68 00:27:45.722 [2024-05-15 17:12:24.400600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.400615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.410844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f0350 00:27:45.722 [2024-05-15 17:12:24.412333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.412348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.422585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e2c28 00:27:45.722 [2024-05-15 17:12:24.424076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.424091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.434314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e5ec8 00:27:45.722 [2024-05-15 17:12:24.435767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.435782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.445455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f4298 00:27:45.722 [2024-05-15 17:12:24.446936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.446951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.455284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190fb8b8 00:27:45.722 [2024-05-15 17:12:24.456272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.469848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190ee190 00:27:45.722 [2024-05-15 17:12:24.471647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.480473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e3d08 00:27:45.722 [2024-05-15 17:12:24.481764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.481779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.492354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.722 [2024-05-15 17:12:24.493640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.493655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.504064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.722 [2024-05-15 17:12:24.505379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.505395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.515806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.722 [2024-05-15 17:12:24.517080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.517096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.527524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.722 [2024-05-15 17:12:24.528845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.528860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.539244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.722 [2024-05-15 17:12:24.540562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.540578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.722 [2024-05-15 17:12:24.550958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.722 [2024-05-15 17:12:24.552274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.722 [2024-05-15 17:12:24.552290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.562656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.563977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.563992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.574363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.575682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.575697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.586082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.587402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.587417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.597776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.599100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 
17:12:24.599116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.609465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.610802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.610817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.621166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.622488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.622503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.632860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.634173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.634189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.644584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.645904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.645922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.656316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.657636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.657651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.668077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.669394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.669410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.679775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.681105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:45.984 [2024-05-15 17:12:24.681120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.691500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.692800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.692816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.703210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.704530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.704549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.714936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.716257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.716272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.726686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.728003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.728019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.738407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.739687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.739703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.750138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.751466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.751481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.984 [2024-05-15 17:12:24.761874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.984 [2024-05-15 17:12:24.763195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12279 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:45.984 [2024-05-15 17:12:24.763211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.985 [2024-05-15 17:12:24.773589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.985 [2024-05-15 17:12:24.774875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.985 [2024-05-15 17:12:24.774890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.985 [2024-05-15 17:12:24.785314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.985 [2024-05-15 17:12:24.786637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.985 [2024-05-15 17:12:24.786653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.985 [2024-05-15 17:12:24.797025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.985 [2024-05-15 17:12:24.798345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.985 [2024-05-15 17:12:24.798361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:45.985 [2024-05-15 17:12:24.808743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:45.985 [2024-05-15 17:12:24.810066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:45.985 [2024-05-15 17:12:24.810082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.820451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.821792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.821808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.832178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.833492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.833508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.843884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.845166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:2505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.845183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.855640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.856946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.856961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.867366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.868692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.868707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.879083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.880398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.880414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.890792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.892111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.892127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.902517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.903809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.903824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.914242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.915560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.915576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.925958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.927247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.927263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.937903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.939227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.247 [2024-05-15 17:12:24.939243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.247 [2024-05-15 17:12:24.949619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.247 [2024-05-15 17:12:24.950948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:24.950966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:24.961352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:24.962633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:24.962649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:24.973081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:24.974408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:24.974424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:24.984805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:24.986125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:24.986141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:24.996516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:24.997855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:24.997870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.008240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.009557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.009573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.019961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.021280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.021295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.031663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.032982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.032998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.043377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.044699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.044715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.055108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.056426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.056442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.066821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.068141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.068156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.248 [2024-05-15 17:12:25.078521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.248 [2024-05-15 17:12:25.079843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.248 [2024-05-15 17:12:25.079858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.090244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 
17:12:25.091566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.091582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.101959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.103277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.103292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.113683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.114997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.115013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.125370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.126689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.126705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.137102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.138425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.138440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.148803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.150120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.150135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.160530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.161899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.161914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.172254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 
00:27:46.510 [2024-05-15 17:12:25.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.173584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.183970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.185293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.185308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.195657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.196939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.196954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.207370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.208690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.208706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.219068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.220389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.220405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.230802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.232078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.232092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.242542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.243875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.243890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.254272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.255587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.255605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.265972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.267291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.267307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.277699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.279019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.279035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.289400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.290724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.290739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.301119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.302394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.302409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.312854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.314174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.314189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.324580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.325902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.325918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.510 [2024-05-15 17:12:25.336274] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.510 [2024-05-15 17:12:25.337589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.510 [2024-05-15 17:12:25.337605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.348012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.349334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.349349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.359744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.361146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.361162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.371551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.372865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.372881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.383253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.384538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.384558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.394963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.396280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.396295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.406672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.407995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.408010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.418392] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.419714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.419729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.430113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.431425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.431441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.441807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.443125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.443140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.453510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.454827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.454843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.465222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.466548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.466564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.476911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.478220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.478236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.488621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.489909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.489924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 
[2024-05-15 17:12:25.500309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.501610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.501625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.512021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.513295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.513310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.772 [2024-05-15 17:12:25.523708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.772 [2024-05-15 17:12:25.525025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.772 [2024-05-15 17:12:25.525040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.535412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.773 [2024-05-15 17:12:25.536684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.773 [2024-05-15 17:12:25.536699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.547119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.773 [2024-05-15 17:12:25.548441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.773 [2024-05-15 17:12:25.548456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.558868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.773 [2024-05-15 17:12:25.560186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.773 [2024-05-15 17:12:25.560203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.570555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.773 [2024-05-15 17:12:25.571870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.773 [2024-05-15 17:12:25.571886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f 
p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.582246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.773 [2024-05-15 17:12:25.583560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.773 [2024-05-15 17:12:25.583576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.593947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:46.773 [2024-05-15 17:12:25.595265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.773 [2024-05-15 17:12:25.595281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.773 [2024-05-15 17:12:25.605660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.606972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.606987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.617359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.618683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.618698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.629065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.630381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.630397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.640741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.642059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.642074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.652450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.653741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.653757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.664153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.665479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.665495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.675858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.677176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.677192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.687543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.688863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.688878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.699256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.700556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.700571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.710944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.712264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.712279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.722652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.723971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.723986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.734351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.735675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.735690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.746058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.747350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.747365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.757768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.759081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.759097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.769486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.770790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.770805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.035 [2024-05-15 17:12:25.781182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.035 [2024-05-15 17:12:25.782497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.035 [2024-05-15 17:12:25.782511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.792879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.794191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.794206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.804573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.805886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.805901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.816271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.817584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.817600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.827958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.829272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.829287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.839649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.840966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.840981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.851341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.852640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.852655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.036 [2024-05-15 17:12:25.863061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.036 [2024-05-15 17:12:25.864385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.036 [2024-05-15 17:12:25.864404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.874765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.297 [2024-05-15 17:12:25.876083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.876099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.886463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.297 [2024-05-15 17:12:25.887792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.887808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.898169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.297 [2024-05-15 17:12:25.899485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.899500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.909874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.297 [2024-05-15 17:12:25.911184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.911199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.921573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f7da8 00:27:47.297 [2024-05-15 17:12:25.922894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.922910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.932658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e73e0 00:27:47.297 [2024-05-15 17:12:25.933960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.933976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.947207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e6738 00:27:47.297 [2024-05-15 17:12:25.949316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.949332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.957788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e4578 00:27:47.297 [2024-05-15 17:12:25.959421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.959435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.967219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f0788 00:27:47.297 [2024-05-15 17:12:25.968228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 17:12:25.968243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:47.297 [2024-05-15 17:12:25.980639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f4b08 00:27:47.297 [2024-05-15 17:12:25.982268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.297 [2024-05-15 
17:12:25.982284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:25.991225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190ea248 00:27:47.298 [2024-05-15 17:12:25.992337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:25.992352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.002540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f2948 00:27:47.298 [2024-05-15 17:12:26.003683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.003699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.015426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e12d8 00:27:47.298 [2024-05-15 17:12:26.016763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.016778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.027171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e23b8 00:27:47.298 [2024-05-15 17:12:26.028492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.028507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.038912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190ed4e8 00:27:47.298 [2024-05-15 17:12:26.040238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.040253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.050665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190ee5c8 00:27:47.298 [2024-05-15 17:12:26.051987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.052002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.062416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e7818 00:27:47.298 [2024-05-15 17:12:26.063747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:47.298 [2024-05-15 17:12:26.063763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.074140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e88f8 00:27:47.298 [2024-05-15 17:12:26.075458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.075473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.085871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e5658 00:27:47.298 [2024-05-15 17:12:26.087187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.087203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.097591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190e6b70 00:27:47.298 [2024-05-15 17:12:26.098907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.098923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.109305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f4298 00:27:47.298 [2024-05-15 17:12:26.110622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.110637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.298 [2024-05-15 17:12:26.121059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f31b8 00:27:47.298 [2024-05-15 17:12:26.122368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.298 [2024-05-15 17:12:26.122383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.559 [2024-05-15 17:12:26.132782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190ea680 00:27:47.559 [2024-05-15 17:12:26.134099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.559 [2024-05-15 17:12:26.134115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.559 [2024-05-15 17:12:26.143773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f0bc0 00:27:47.559 [2024-05-15 17:12:26.145074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4835 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:47.559 [2024-05-15 17:12:26.145089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:47.559 [2024-05-15 17:12:26.156282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f1ca0 00:27:47.559 [2024-05-15 17:12:26.157543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.559 [2024-05-15 17:12:26.157562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.559 [2024-05-15 17:12:26.168043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190f2d80 00:27:47.559 [2024-05-15 17:12:26.169348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.559 [2024-05-15 17:12:26.169365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.559 [2024-05-15 17:12:26.179817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d560) with pdu=0x2000190de8a8 00:27:47.559 [2024-05-15 17:12:26.181118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.559 [2024-05-15 17:12:26.181134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.559 00:27:47.559 Latency(us) 00:27:47.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.559 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:47.559 nvme0n1 : 2.01 21738.41 84.92 0.00 0.00 5880.32 2225.49 14199.47 00:27:47.559 =================================================================================================================== 00:27:47.559 Total : 21738.41 84.92 0.00 0.00 5880.32 2225.49 14199.47 00:27:47.559 0 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:47.559 | .driver_specific 00:27:47.559 | .nvme_error 00:27:47.559 | .status_code 00:27:47.559 | .command_transient_transport_error' 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1632455 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1632455 ']' 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1632455 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.559 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1632455 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1632455' 00:27:47.821 killing process with pid 1632455 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1632455 00:27:47.821 Received shutdown signal, test time was about 2.000000 seconds 00:27:47.821 00:27:47.821 Latency(us) 00:27:47.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.821 =================================================================================================================== 00:27:47.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1632455 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1633153 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1633153 /var/tmp/bperf.sock 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1633153 ']' 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.821 17:12:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.821 [2024-05-15 17:12:26.590857] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
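The pass/fail decision recorded a few entries above (host/digest.sh@71: (( 170 > 0 ))) comes from counting the COMMAND TRANSIENT TRANSPORT ERROR completions that accumulated while the target corrupted data digests. Reassembled from the commands visible in that trace (RPC socket, bdev name and jq filter are copied from the log; this is a minimal illustrative sketch, not additional test output), the check amounts to:

  # Read the per-bdev NVMe error counters over the bperf RPC socket. The counters are
  # available because bdev_nvme_set_options is invoked with --nvme-error-stat (the same
  # call appears in the trace for the next pass below).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # digest.sh then asserts the reported count is greater than zero; in this run it was 170.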
00:27:47.821 [2024-05-15 17:12:26.590914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633153 ] 00:27:47.821 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:47.821 Zero copy mechanism will not be used. 00:27:47.821 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.083 [2024-05-15 17:12:26.664512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.083 [2024-05-15 17:12:26.717690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.656 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.656 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:48.656 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.656 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.917 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:48.917 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.917 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.917 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.917 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.917 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.917 nvme0n1 00:27:49.178 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:49.178 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.178 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.178 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.178 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:49.178 17:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.178 Zero copy mechanism will not be used. 00:27:49.178 Running I/O for 2 seconds... 
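The second error-injection pass that produces the entries below is set up by the calls traced above: a fresh bdevperf instance (-w randwrite -o 131072 -t 2 -q 16 -z) is started on /var/tmp/bperf.sock, after which the following RPCs are issued. Addresses, NQN, paths and flags are copied from the trace; rpc_cmd is the autotest_common.sh helper that talks to the target application, and the layout here is a condensed sketch for readability rather than literal log output:

  # bperf side: enable NVMe error counters and retry transient failures indefinitely
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: start with crc32c error injection disabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # attach the controller with data digest enabled (--ddgst) so NVMe/TCP data PDUs carry a CRC32C
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt the crc32c result at the traced interval (-i 32), which is what
  # triggers the 'Data digest error' / COMMAND TRANSIENT TRANSPORT ERROR pairs that follow
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the 2-second workload through the already-running bdevperf instance
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests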
00:27:49.178 [2024-05-15 17:12:27.855234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.855556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.855583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.863306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.863397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.863415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.872325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.872557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.872576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.881706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.882000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.882018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.891706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.892109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.892126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.902598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.902956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.902973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.913932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.914224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.914240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.924662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.925015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.925032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.934947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.935267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.935283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.945578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.945907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.945923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.956010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.956307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.956324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.966543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.966947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.966964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.976956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.977248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.977264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.988451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.988775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.988792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:27.999460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:27.999786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:27.999803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.178 [2024-05-15 17:12:28.010425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.178 [2024-05-15 17:12:28.010719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.178 [2024-05-15 17:12:28.010737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.020646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.020959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.020977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.031510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.031810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.031826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.042292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.042512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.042528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.049331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.049625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.049641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.056238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.056537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.056557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.062704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.062993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.063009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.067868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.068158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.068174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.075335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.075628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.075644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.083282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.440 [2024-05-15 17:12:28.083572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.440 [2024-05-15 17:12:28.083588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.440 [2024-05-15 17:12:28.091528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.091759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.091778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.099307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.099612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.107169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.107494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 
[2024-05-15 17:12:28.107510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.112902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.113211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.113227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.123439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.123762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.123778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.131659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.131778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.131792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.140522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.140772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.140788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.148244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.148538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.148558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.157702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.158008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.158024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.165556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.165862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.165878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.174207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.174518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.174534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.183171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.183490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.183506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.192568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.192888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.192904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.201895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.202196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.202212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.210364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.210667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.210684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.218163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.218463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.218479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.227600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.227929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.227944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.235820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.236119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.236138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.244287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.244515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.244531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.250147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.250455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.250471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.256877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.257177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.257193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.262999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.263352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.263368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.441 [2024-05-15 17:12:28.269932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.441 [2024-05-15 17:12:28.270226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.441 [2024-05-15 17:12:28.270242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.279420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.279712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.279728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.287672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.287747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.287761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.297638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.297972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.297988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.307576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.307891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.307907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.317809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.318107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.318123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.327704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.328035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.328051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.337426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.337731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.337747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.348828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 
[2024-05-15 17:12:28.349142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.349158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.358098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.358215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.358229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.368389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.368701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.368718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.376782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.377086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.377102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.383315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.383401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.383416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.392859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.393175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.393191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.403834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.404151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.404167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.414290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.414598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.414615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.424955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.425279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.425295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.434445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.434782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.434798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.443731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.444039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.444055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.451888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.452181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.452197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.460149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.460440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.460456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.467807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.468126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.468146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.476739] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.477060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.703 [2024-05-15 17:12:28.477076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.703 [2024-05-15 17:12:28.488723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.703 [2024-05-15 17:12:28.489043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.704 [2024-05-15 17:12:28.489059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.704 [2024-05-15 17:12:28.499383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.704 [2024-05-15 17:12:28.499728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.704 [2024-05-15 17:12:28.499745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.704 [2024-05-15 17:12:28.509209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.704 [2024-05-15 17:12:28.509561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.704 [2024-05-15 17:12:28.509577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.704 [2024-05-15 17:12:28.518577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.704 [2024-05-15 17:12:28.518881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.704 [2024-05-15 17:12:28.518897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.704 [2024-05-15 17:12:28.526330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.704 [2024-05-15 17:12:28.526671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.704 [2024-05-15 17:12:28.526687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.704 [2024-05-15 17:12:28.534798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.704 [2024-05-15 17:12:28.535137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.704 [2024-05-15 17:12:28.535154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:49.966 [2024-05-15 17:12:28.544583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.544881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.544897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.552241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.552540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.552561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.561706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.562024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.562040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.571856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.572162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.572178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.581177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.581470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.581486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.591040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.591353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.591369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.599898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.600202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.600219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.606080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.606410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.606426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.612300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.612609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.612624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.619922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.620204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.620221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.628257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.628562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.628579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.635393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.635706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.635722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.643600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.643934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.643951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.652037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.652374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.652390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.660022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.660354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.660369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.668129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.668211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.668224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.676843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.677140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.677156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.684838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.685153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.685169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.690991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.691296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.691316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.700829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.701164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.701180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.710762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.711078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.711094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.966 [2024-05-15 17:12:28.721580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.966 [2024-05-15 17:12:28.721911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.966 [2024-05-15 17:12:28.721927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.731304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.731525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 [2024-05-15 17:12:28.731540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.742460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.742633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 [2024-05-15 17:12:28.742648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.752619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.752919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 [2024-05-15 17:12:28.752935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.762897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.763203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 [2024-05-15 17:12:28.763219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.773575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.773665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 [2024-05-15 17:12:28.773679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.785300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.785639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 
[2024-05-15 17:12:28.785656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.967 [2024-05-15 17:12:28.794909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:49.967 [2024-05-15 17:12:28.795258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.967 [2024-05-15 17:12:28.795274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.805917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.806236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.806253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.816571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.816883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.816898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.827476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.827821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.827836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.837464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.837772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.837789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.848003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.848342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.848358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.858616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.858958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.858974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.867556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.867801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.867817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.873094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.873425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.873440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.881134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.881434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.881450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.888080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.888372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.888388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.893778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.894013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.894029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.901495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.901832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.901849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.909354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.909589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.909605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.917215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.917515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.917531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.922357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.922667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.922683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.928010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.928329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.928348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.934775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.935069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.935085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.941035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.941255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.941271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.949301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.949626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.949641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.958556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.958643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.958657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.964816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.965185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.965201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.970803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.971079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.971095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.977795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.978092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.978108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.983429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.983675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.983691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.988822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.989092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.989108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:28.995080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:28.995466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:28.995482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.001490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 
[2024-05-15 17:12:29.001867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.001883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.009389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.009599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.009615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.017328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.017645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.017660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.024448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.024775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.024791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.030573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.030907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.030923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.035940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.036239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.043229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.043511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.043530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.049347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.049554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.049569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.229 [2024-05-15 17:12:29.058712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.229 [2024-05-15 17:12:29.059084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.229 [2024-05-15 17:12:29.059099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.068774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.069176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.069192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.074867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.075220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.075236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.080969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.081255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.081271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.086528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.086903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.086919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.094667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.095045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.095061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.102901] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.103257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.103274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.111661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.111875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.111891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.120241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.120608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.120624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.127872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.128148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.128164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.133307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.133674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.133691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.138524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.138743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.138759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.143033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.143319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.143335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:50.491 [2024-05-15 17:12:29.150301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.150684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.150700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.159323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.159664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.159686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.168761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.169087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.169103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.177999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.178398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.178414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.188388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.188667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.188683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.197400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.197746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.197763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.207451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.207958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.207974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.218717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.491 [2024-05-15 17:12:29.219122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.491 [2024-05-15 17:12:29.219138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.491 [2024-05-15 17:12:29.229907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.230378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.230394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.239794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.240269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.240285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.249823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.250212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.250229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.259723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.260095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.260114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.268413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.268821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.268838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.277938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.278285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.278301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.287146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.287543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.287563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.294858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.295193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.295209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.302306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.302516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.302532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.306856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.307163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.307179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.313744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.314044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.314060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.492 [2024-05-15 17:12:29.320002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.492 [2024-05-15 17:12:29.320208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.492 [2024-05-15 17:12:29.320224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.327010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.327315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.754 [2024-05-15 17:12:29.327332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.336214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.336562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.754 [2024-05-15 17:12:29.336578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.343270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.343532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.754 [2024-05-15 17:12:29.343553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.350055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.350387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.754 [2024-05-15 17:12:29.350403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.358081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.358400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.754 [2024-05-15 17:12:29.358416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.363244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.363454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.754 [2024-05-15 17:12:29.363470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.754 [2024-05-15 17:12:29.370113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.754 [2024-05-15 17:12:29.370490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.370505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.377369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.377648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 
[2024-05-15 17:12:29.377664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.384941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.385235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.385251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.391995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.392375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.392391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.401016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.401337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.401354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.406798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.407004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.407020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.413742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.414012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.414028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.420817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.421139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.421155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.427294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.427472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.427487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.432097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.432400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.432416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.437351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.437632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.437648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.443425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.443701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.448277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.448448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.448464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.453713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.453983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.453999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.460538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.460713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.460729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.467139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.467559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.467574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.475541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.475724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.475741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.482769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.483153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.483169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.487588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.487756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.487772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.495494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.495783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.495799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.502214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.502536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.502556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.506422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.506609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.506625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.510109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.510290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.510307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.514157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.514338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.514354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.520135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.520311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.520327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.524837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.525159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.525175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.530093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.530266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.530282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.534132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.534310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.755 [2024-05-15 17:12:29.534326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.755 [2024-05-15 17:12:29.537825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.755 [2024-05-15 17:12:29.537997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.538015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.541335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 
[2024-05-15 17:12:29.541505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.541521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.545362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 [2024-05-15 17:12:29.545528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.545549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.552559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 [2024-05-15 17:12:29.552811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.552827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.560455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 [2024-05-15 17:12:29.560766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.560782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.568125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 [2024-05-15 17:12:29.568295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.568311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.574662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 [2024-05-15 17:12:29.575073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.575089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.756 [2024-05-15 17:12:29.584260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:50.756 [2024-05-15 17:12:29.584684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.756 [2024-05-15 17:12:29.584700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.018 [2024-05-15 17:12:29.592594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.018 [2024-05-15 17:12:29.593024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.018 [2024-05-15 17:12:29.593039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.018 [2024-05-15 17:12:29.601575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.018 [2024-05-15 17:12:29.601863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.018 [2024-05-15 17:12:29.601884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.018 [2024-05-15 17:12:29.610049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.018 [2024-05-15 17:12:29.610346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.018 [2024-05-15 17:12:29.610362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.018 [2024-05-15 17:12:29.617636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.018 [2024-05-15 17:12:29.618048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.018 [2024-05-15 17:12:29.618064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.018 [2024-05-15 17:12:29.625129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.625431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.625447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.632107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.632518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.632534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.640397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.640569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.640589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.647770] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.647951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.647967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.651963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.652145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.652161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.657184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.657479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.662780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.663021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.663037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.667977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.668145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.668160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.671811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.671988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.672004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.675341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.675512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.675529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:51.019 [2024-05-15 17:12:29.679872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.680198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.687665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.687835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.691234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.691407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.691424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.694910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.695085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.695101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.699859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.700062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.700081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.705145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.705312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.705328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.709109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.709282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.709298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.712787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.712955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.712971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.719752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.720019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.720035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.723425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.723591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.723607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.727895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.728053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.728069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.733225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.733382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.733397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.736872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.737031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.737047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.740918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.741088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.741103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.744475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.744648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.744664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.748602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.748770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.748785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.752119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.752281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.755593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.019 [2024-05-15 17:12:29.755752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.019 [2024-05-15 17:12:29.755768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.019 [2024-05-15 17:12:29.759764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.759919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.759934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.765759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.765914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.765930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.770521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.770682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.770698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.774168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.774342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.774358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.778508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.778759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.778775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.787646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.787843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.787859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.795965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.796269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.796284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.804649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.805074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.805090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.811955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.812273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.812289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.822430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.822686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 
[2024-05-15 17:12:29.822702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.833093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.833544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.833564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.020 [2024-05-15 17:12:29.843946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x169d8a0) with pdu=0x2000190fef90 00:27:51.020 [2024-05-15 17:12:29.844256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.020 [2024-05-15 17:12:29.844272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:51.283 00:27:51.283 Latency(us) 00:27:51.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.283 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:51.283 nvme0n1 : 2.01 3999.26 499.91 0.00 0.00 3992.86 1624.75 11851.09 00:27:51.283 =================================================================================================================== 00:27:51.283 Total : 3999.26 499.91 0.00 0.00 3992.86 1624.75 11851.09 00:27:51.283 0 00:27:51.283 17:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:51.283 17:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:51.283 17:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:51.283 | .driver_specific 00:27:51.283 | .nvme_error 00:27:51.283 | .status_code 00:27:51.283 | .command_transient_transport_error' 00:27:51.283 17:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 258 > 0 )) 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1633153 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1633153 ']' 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1633153 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1633153 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process 
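The trace above shows how host/digest.sh turns the flood of data-digest errors into a pass/fail check: get_transient_errcount queries the running bdevperf instance over its JSON-RPC socket with bdev_get_iostat and pulls the command_transient_transport_error counter out of the bdev's NVMe error statistics with jq, then requires the count to be greater than zero (258 in this run). A minimal standalone sketch of that same query follows, assuming bdevperf is still listening on /var/tmp/bperf.sock; the rpc.py path, socket path, and jq filter are copied from the trace, while the variable names and the final echo are illustrative only.

# Sketch of the transient-error check performed by host/digest.sh in the trace above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Ask bdevperf (via its JSON-RPC socket) for per-bdev I/O statistics, then extract the
# transient transport error counter that the digest-error test expects to be non-zero.
errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')
# Same condition as the "(( 258 > 0 ))" check in the trace: at least one injected digest
# error must have surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion.
if (( errcount > 0 )); then
    echo "digest error test saw $errcount transient transport errors"
fi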
with pid 1633153' 00:27:51.283 killing process with pid 1633153 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1633153 00:27:51.283 Received shutdown signal, test time was about 2.000000 seconds 00:27:51.283 00:27:51.283 Latency(us) 00:27:51.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.283 =================================================================================================================== 00:27:51.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.283 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1633153 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1630853 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1630853 ']' 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1630853 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1630853 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1630853' 00:27:51.545 killing process with pid 1630853 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1630853 00:27:51.545 [2024-05-15 17:12:30.266518] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:51.545 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1630853 00:27:51.805 00:27:51.806 real 0m16.111s 00:27:51.806 user 0m31.694s 00:27:51.806 sys 0m3.335s 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.806 ************************************ 00:27:51.806 END TEST nvmf_digest_error 00:27:51.806 ************************************ 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:51.806 rmmod nvme_tcp 00:27:51.806 rmmod nvme_fabrics 00:27:51.806 rmmod nvme_keyring 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1630853 ']' 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1630853 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1630853 ']' 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1630853 00:27:51.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1630853) - No such process 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1630853 is not found' 00:27:51.806 Process with pid 1630853 is not found 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.806 17:12:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.353 17:12:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.353 00:27:54.353 real 0m41.825s 00:27:54.353 user 1m5.725s 00:27:54.353 sys 0m11.943s 00:27:54.353 17:12:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:54.353 17:12:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.353 ************************************ 00:27:54.353 END TEST nvmf_digest 00:27:54.353 ************************************ 00:27:54.353 17:12:32 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:27:54.353 17:12:32 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:27:54.353 17:12:32 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:27:54.353 17:12:32 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:54.353 17:12:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:54.353 17:12:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:54.353 17:12:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:54.353 ************************************ 00:27:54.353 START TEST nvmf_bdevperf 00:27:54.353 ************************************ 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:54.353 * Looking for test storage... 
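For reference, the transient-error check traced above reduces to one RPC plus a jq filter: bdev_get_iostat exposes per-bdev NVMe error counters, and the digest-error test passes when the COMMAND TRANSIENT TRANSPORT ERROR tally is non-zero after the injected data-digest corruption. A minimal standalone sketch of that query, assuming the bperf RPC socket path and bdev name used in this run:

# Ask the bdevperf app on /var/tmp/bperf.sock for nvme0n1's iostat and pull
# out the transient transport error counter (258 in the run above).
errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "saw $errs transient transport errors, as expected for the digest-error case"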
00:27:54.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:54.353 17:12:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:00.945 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:00.945 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:00.945 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.945 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:00.945 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.946 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.207 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.207 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.207 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:01.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:28:01.207 00:28:01.207 --- 10.0.0.2 ping statistics --- 00:28:01.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.207 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:28:01.207 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
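The nvmftestinit sequence traced above builds the topology everything else in this run depends on: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of those steps, using the interface and namespace names discovered here:

ip netns add cvl_0_0_ns_spdk                    # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept TCP 4420 on the initiator-side port, as the trace does
ping -c 1 10.0.0.2                              # reachability check; the trace also pings 10.0.0.1 from inside the namespace

Because these are two physical ports rather than a veth pair (NET_TYPE=phy), the ping round trips confirm real link connectivity before any NVMe/TCP traffic is attempted.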
00:28:01.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:28:01.208 00:28:01.208 --- 10.0.0.1 ping statistics --- 00:28:01.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.208 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1638088 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1638088 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1638088 ']' 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.208 17:12:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:01.208 [2024-05-15 17:12:39.941570] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:01.208 [2024-05-15 17:12:39.941636] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.208 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.208 [2024-05-15 17:12:40.029614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:01.469 [2024-05-15 17:12:40.132592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:01.469 [2024-05-15 17:12:40.132651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.469 [2024-05-15 17:12:40.132659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.469 [2024-05-15 17:12:40.132666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.470 [2024-05-15 17:12:40.132672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.470 [2024-05-15 17:12:40.132829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.470 [2024-05-15 17:12:40.133114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.470 [2024-05-15 17:12:40.133116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.041 [2024-05-15 17:12:40.766381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.041 Malloc0 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.041 [2024-05-15 17:12:40.837174] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:02.041 [2024-05-15 17:12:40.837384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.041 { 00:28:02.041 "params": { 00:28:02.041 "name": "Nvme$subsystem", 00:28:02.041 "trtype": "$TEST_TRANSPORT", 00:28:02.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.041 "adrfam": "ipv4", 00:28:02.041 "trsvcid": "$NVMF_PORT", 00:28:02.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.041 "hdgst": ${hdgst:-false}, 00:28:02.041 "ddgst": ${ddgst:-false} 00:28:02.041 }, 00:28:02.041 "method": "bdev_nvme_attach_controller" 00:28:02.041 } 00:28:02.041 EOF 00:28:02.041 )") 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:02.041 17:12:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.041 "params": { 00:28:02.041 "name": "Nvme1", 00:28:02.041 "trtype": "tcp", 00:28:02.041 "traddr": "10.0.0.2", 00:28:02.041 "adrfam": "ipv4", 00:28:02.041 "trsvcid": "4420", 00:28:02.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.041 "hdgst": false, 00:28:02.041 "ddgst": false 00:28:02.041 }, 00:28:02.041 "method": "bdev_nvme_attach_controller" 00:28:02.041 }' 00:28:02.303 [2024-05-15 17:12:40.887142] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:02.303 [2024-05-15 17:12:40.887193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638142 ] 00:28:02.303 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.303 [2024-05-15 17:12:40.945861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.303 [2024-05-15 17:12:41.010288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.565 Running I/O for 1 seconds... 
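With the network up, the target bring-up traced above is a handful of rpc_cmd calls against the nvmf_tgt that was started inside the namespace with -m 0xE (three reactors). The same sequence as a standalone sketch; the flag meanings noted in the comments beyond what the trace itself shows (e.g. -o and -u on the transport) are my reading and worth confirming against rpc.py --help:

rpc=scripts/rpc.py                                # defaults to the target's /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options this test uses
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener call returns, the target prints 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' (visible above) and bdevperf can attach from the initiator side.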
00:28:03.509 00:28:03.509 Latency(us) 00:28:03.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:03.509 Verification LBA range: start 0x0 length 0x4000 00:28:03.509 Nvme1n1 : 1.00 8909.06 34.80 0.00 0.00 14302.44 1099.09 16165.55 00:28:03.509 =================================================================================================================== 00:28:03.509 Total : 8909.06 34.80 0.00 0.00 14302.44 1099.09 16165.55 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1638457 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:03.771 { 00:28:03.771 "params": { 00:28:03.771 "name": "Nvme$subsystem", 00:28:03.771 "trtype": "$TEST_TRANSPORT", 00:28:03.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.771 "adrfam": "ipv4", 00:28:03.771 "trsvcid": "$NVMF_PORT", 00:28:03.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.771 "hdgst": ${hdgst:-false}, 00:28:03.771 "ddgst": ${ddgst:-false} 00:28:03.771 }, 00:28:03.771 "method": "bdev_nvme_attach_controller" 00:28:03.771 } 00:28:03.771 EOF 00:28:03.771 )") 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:03.771 17:12:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:03.771 "params": { 00:28:03.771 "name": "Nvme1", 00:28:03.771 "trtype": "tcp", 00:28:03.771 "traddr": "10.0.0.2", 00:28:03.771 "adrfam": "ipv4", 00:28:03.771 "trsvcid": "4420", 00:28:03.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.771 "hdgst": false, 00:28:03.771 "ddgst": false 00:28:03.771 }, 00:28:03.771 "method": "bdev_nvme_attach_controller" 00:28:03.771 }' 00:28:03.771 [2024-05-15 17:12:42.463032] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:03.771 [2024-05-15 17:12:42.463087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638457 ] 00:28:03.771 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.771 [2024-05-15 17:12:42.522345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.771 [2024-05-15 17:12:42.586452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.342 Running I/O for 15 seconds... 
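On the initiator side, bdevperf never calls rpc.py for setup; gen_nvmf_target_json (its expansion is printed just above) emits a bdev_nvme_attach_controller entry and the result is fed to bdevperf on an anonymous fd (--json /dev/fd/62 for the 1-second run, /dev/fd/63 for the 15-second run). A rough equivalent using a plain file, reusing exactly the parameters printed in the trace; note that the surrounding "subsystems"/"bdev" wrapper is my assumption about the final config shape, since only the attach-controller fragment is visible in the expansion:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# 128 outstanding 4 KiB verify I/Os for 15 seconds, matching the run just started above
build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f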
00:28:06.932 17:12:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1638088 00:28:06.932 17:12:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:06.932 [2024-05-15 17:12:45.431050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.933 [2024-05-15 17:12:45.431091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 [2024-05-15 17:12:45.431878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.933 [2024-05-15 17:12:45.431885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.933 
[2024-05-15 17:12:45.431894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:06.933 [2024-05-15 17:12:45.431901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same print_command / print_completion pair repeats for each remaining outstanding I/O on qid:1 (WRITE commands through lba:95792 and READ commands at lba:94784-94880, len:8 each), all completed with ABORTED - SQ DELETION (00/08) between 17:12:45.431910 and 17:12:45.433308 ...] 
00:28:06.936 [2024-05-15 17:12:45.433317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1498990 is same with the state(5) to be set 
00:28:06.936 [2024-05-15 17:12:45.433326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:28:06.936 [2024-05-15 17:12:45.433332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:28:06.936 [2024-05-15 17:12:45.433338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94888 len:8 PRP1 0x0 PRP2 0x0 
00:28:06.936 [2024-05-15 17:12:45.433346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:06.936 [2024-05-15 17:12:45.433384] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1498990 was disconnected and freed. reset controller. 
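For reading the completions above: the "(00/08)" printed by spdk_nvme_print_completion is the NVMe status code type / status code pair, and type 0x0 (generic command status) with code 0x08 is "Command Aborted due to SQ Deletion", which is what every I/O still outstanding on the qpair receives once the submission queue is torn down during the disconnect. The sketch below is a minimal stand-alone decoder for that pair; it is an illustration for reading the log, not SPDK's own helper, and its string table is deliberately abridged.

    /* Illustrative decoder for the "(SCT/SC)" pair in the completion lines above,
     * e.g. "ABORTED - SQ DELETION (00/08)".  A stand-alone sketch for reading the
     * log, not SPDK code; only the codes relevant to this log are spelled out. */
    #include <stdio.h>

    static const char *nvme_generic_status_str(unsigned sct, unsigned sc)
    {
        if (sct != 0x0) {
            return "non-generic status code type";
        }
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION"; /* status seen throughout this log */
        default:   return "other generic status";
        }
    }

    int main(void)
    {
        /* Every completion above carries SCT 0x0, SC 0x08. */
        printf("(00/08) -> %s\n", nvme_generic_status_str(0x0, 0x08));
        return 0;
    }

The dnr:0 field in the same lines is the Do Not Retry bit being clear, meaning the spec permits these aborted commands to be retried once the controller comes back.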
00:28:06.936 [2024-05-15 17:12:45.436928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:28:06.936 [2024-05-15 17:12:45.436975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 
00:28:06.936 [2024-05-15 17:12:45.437831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:06.936 [2024-05-15 17:12:45.438222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:06.936 [2024-05-15 17:12:45.438239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 
00:28:06.936 [2024-05-15 17:12:45.438249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 
00:28:06.936 [2024-05-15 17:12:45.438491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 
00:28:06.936 [2024-05-15 17:12:45.438720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:28:06.936 [2024-05-15 17:12:45.438730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:28:06.936 [2024-05-15 17:12:45.438738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:06.936 [2024-05-15 17:12:45.442280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same resetting controller / reconnect cycle repeats 27 more times between 17:12:45.451063 and 17:12:45.816738 (tqpair=0x123e750, addr=10.0.0.2, port=4420): every attempt fails with connect() errno = 111 and ends with "Resetting controller failed." ...] 
00:28:07.201 [2024-05-15 17:12:45.825507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.826150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.826496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.826509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.826518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.826765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.826987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.826996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.827003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.830542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.201 [2024-05-15 17:12:45.839306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.839873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.840227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.840240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.840249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.840487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.840717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.840726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.840733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.844275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.201 [2024-05-15 17:12:45.853244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.853900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.854246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.854258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.854268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.854506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.854736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.854746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.854754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.858296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.201 [2024-05-15 17:12:45.867069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.867663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.868024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.868037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.868050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.868289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.868511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.868519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.868526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.872079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.201 [2024-05-15 17:12:45.881051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.881748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.882095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.882108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.882117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.882355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.882584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.882593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.882600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.886142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.201 [2024-05-15 17:12:45.894914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.895575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.895966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.895979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.895988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.896226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.896448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.896464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.896472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.900023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.201 [2024-05-15 17:12:45.908787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.909471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.909821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.909835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.909844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.910087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.910309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.910318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.910325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.913904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.201 [2024-05-15 17:12:45.922676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.923243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.923595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.923609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.923618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.923856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.924078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.924086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.924093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.927641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.201 [2024-05-15 17:12:45.936651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.937333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.937692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.937706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.937715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.937954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.201 [2024-05-15 17:12:45.938176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.201 [2024-05-15 17:12:45.938184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.201 [2024-05-15 17:12:45.938191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.201 [2024-05-15 17:12:45.941741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.201 [2024-05-15 17:12:45.950500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.201 [2024-05-15 17:12:45.951050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.951283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.201 [2024-05-15 17:12:45.951301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.201 [2024-05-15 17:12:45.951308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.201 [2024-05-15 17:12:45.951527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.202 [2024-05-15 17:12:45.951756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.202 [2024-05-15 17:12:45.951764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.202 [2024-05-15 17:12:45.951771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.202 [2024-05-15 17:12:45.955307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.202 [2024-05-15 17:12:45.964485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.202 [2024-05-15 17:12:45.965137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:45.965477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:45.965490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.202 [2024-05-15 17:12:45.965499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.202 [2024-05-15 17:12:45.965745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.202 [2024-05-15 17:12:45.965968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.202 [2024-05-15 17:12:45.965977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.202 [2024-05-15 17:12:45.965985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.202 [2024-05-15 17:12:45.969540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.202 [2024-05-15 17:12:45.978311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.202 [2024-05-15 17:12:45.978973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:45.979318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:45.979330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.202 [2024-05-15 17:12:45.979340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.202 [2024-05-15 17:12:45.979586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.202 [2024-05-15 17:12:45.979809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.202 [2024-05-15 17:12:45.979817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.202 [2024-05-15 17:12:45.979825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.202 [2024-05-15 17:12:45.983368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.202 [2024-05-15 17:12:45.992130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.202 [2024-05-15 17:12:45.992831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:45.993175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:45.993188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.202 [2024-05-15 17:12:45.993198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.202 [2024-05-15 17:12:45.993436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.202 [2024-05-15 17:12:45.993666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.202 [2024-05-15 17:12:45.993680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.202 [2024-05-15 17:12:45.993687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.202 [2024-05-15 17:12:45.997231] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.202 [2024-05-15 17:12:46.005997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.202 [2024-05-15 17:12:46.006734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:46.007079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:46.007091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.202 [2024-05-15 17:12:46.007101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.202 [2024-05-15 17:12:46.007339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.202 [2024-05-15 17:12:46.007569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.202 [2024-05-15 17:12:46.007579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.202 [2024-05-15 17:12:46.007594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.202 [2024-05-15 17:12:46.011140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.202 [2024-05-15 17:12:46.019906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.202 [2024-05-15 17:12:46.020412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:46.020767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.202 [2024-05-15 17:12:46.020782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.202 [2024-05-15 17:12:46.020791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.202 [2024-05-15 17:12:46.021029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.202 [2024-05-15 17:12:46.021251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.202 [2024-05-15 17:12:46.021260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.202 [2024-05-15 17:12:46.021267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.202 [2024-05-15 17:12:46.024814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.465 [2024-05-15 17:12:46.033796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.034472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.034822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.034836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.034846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.035084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.035307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.035316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.035328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.038875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.465 [2024-05-15 17:12:46.047648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.048195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.048555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.048566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.048574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.048793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.049011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.049021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.049027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.052570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.465 [2024-05-15 17:12:46.061536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.062213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.062567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.062580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.062590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.062828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.063050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.063059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.063066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.066624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.465 [2024-05-15 17:12:46.075391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.075827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.076146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.076155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.076163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.076381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.076604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.076614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.076621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.080167] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.465 [2024-05-15 17:12:46.089352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.089902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.090230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.090240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.090247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.090465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.090692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.090701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.090708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.094244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.465 [2024-05-15 17:12:46.103257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.103887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.104334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.104347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.104356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.104608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.104831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.104839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.104847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.108387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.465 [2024-05-15 17:12:46.117152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.117835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.118081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.118094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.465 [2024-05-15 17:12:46.118104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.465 [2024-05-15 17:12:46.118342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.465 [2024-05-15 17:12:46.118573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.465 [2024-05-15 17:12:46.118582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.465 [2024-05-15 17:12:46.118589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.465 [2024-05-15 17:12:46.122130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.465 [2024-05-15 17:12:46.131112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.465 [2024-05-15 17:12:46.131793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.465 [2024-05-15 17:12:46.132132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.132145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.132155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.132393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.132622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.132631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.132638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.136184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.466 [2024-05-15 17:12:46.144986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.145666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.146074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.146088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.146097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.146336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.146565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.146574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.146582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.150125] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.466 [2024-05-15 17:12:46.158897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.159444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.159772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.159783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.159790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.160009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.160228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.160236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.160242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.163784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.466 [2024-05-15 17:12:46.172775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.173256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.173594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.173608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.173618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.173857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.174079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.174087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.174094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.177643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.466 [2024-05-15 17:12:46.186622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.187173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.187513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.187527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.187537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.187781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.188004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.188012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.188019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.191563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.466 [2024-05-15 17:12:46.200539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.201230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.201627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.201642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.201651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.201890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.202112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.202121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.202128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.205677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.466 [2024-05-15 17:12:46.214440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.214972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.215273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.215284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.215291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.215510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.215734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.215750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.215757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.219297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.466 [2024-05-15 17:12:46.228267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.228922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.229235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.229248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.229258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.229496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.229725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.229735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.229742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.233287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.466 [2024-05-15 17:12:46.242064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.242663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.243008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.243021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.243030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.243269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.466 [2024-05-15 17:12:46.243491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.466 [2024-05-15 17:12:46.243499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.466 [2024-05-15 17:12:46.243506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.466 [2024-05-15 17:12:46.247057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.466 [2024-05-15 17:12:46.256035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.466 [2024-05-15 17:12:46.256583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.256930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.466 [2024-05-15 17:12:46.256940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.466 [2024-05-15 17:12:46.256954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.466 [2024-05-15 17:12:46.257173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.467 [2024-05-15 17:12:46.257392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.467 [2024-05-15 17:12:46.257399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.467 [2024-05-15 17:12:46.257406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.467 [2024-05-15 17:12:46.260952] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.467 [2024-05-15 17:12:46.269943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.467 [2024-05-15 17:12:46.270620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.467 [2024-05-15 17:12:46.270991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.467 [2024-05-15 17:12:46.271004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.467 [2024-05-15 17:12:46.271013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.467 [2024-05-15 17:12:46.271252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.467 [2024-05-15 17:12:46.271474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.467 [2024-05-15 17:12:46.271482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.467 [2024-05-15 17:12:46.271489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.467 [2024-05-15 17:12:46.275042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.467 [2024-05-15 17:12:46.283820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.467 [2024-05-15 17:12:46.284519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.467 [2024-05-15 17:12:46.284836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.467 [2024-05-15 17:12:46.284849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.467 [2024-05-15 17:12:46.284859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.467 [2024-05-15 17:12:46.285097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.467 [2024-05-15 17:12:46.285319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.467 [2024-05-15 17:12:46.285328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.467 [2024-05-15 17:12:46.285336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.467 [2024-05-15 17:12:46.288886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.729 [2024-05-15 17:12:46.297721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.729 [2024-05-15 17:12:46.298386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.298759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.298774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.729 [2024-05-15 17:12:46.298784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.729 [2024-05-15 17:12:46.299027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.729 [2024-05-15 17:12:46.299250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.729 [2024-05-15 17:12:46.299258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.729 [2024-05-15 17:12:46.299265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.729 [2024-05-15 17:12:46.302814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.729 [2024-05-15 17:12:46.311579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.729 [2024-05-15 17:12:46.312126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.312464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.312473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.729 [2024-05-15 17:12:46.312481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.729 [2024-05-15 17:12:46.312704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.729 [2024-05-15 17:12:46.312924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.729 [2024-05-15 17:12:46.312931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.729 [2024-05-15 17:12:46.312938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.729 [2024-05-15 17:12:46.316475] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.729 [2024-05-15 17:12:46.325452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.729 [2024-05-15 17:12:46.326002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.326325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.326335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.729 [2024-05-15 17:12:46.326342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.729 [2024-05-15 17:12:46.326565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.729 [2024-05-15 17:12:46.326784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.729 [2024-05-15 17:12:46.326791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.729 [2024-05-15 17:12:46.326798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.729 [2024-05-15 17:12:46.330335] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.729 [2024-05-15 17:12:46.339311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.729 [2024-05-15 17:12:46.339764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.340103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.340112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.729 [2024-05-15 17:12:46.340120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.729 [2024-05-15 17:12:46.340338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.729 [2024-05-15 17:12:46.340564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.729 [2024-05-15 17:12:46.340573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.729 [2024-05-15 17:12:46.340580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.729 [2024-05-15 17:12:46.344113] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.729 [2024-05-15 17:12:46.353325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.729 [2024-05-15 17:12:46.353847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.354160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.354170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.729 [2024-05-15 17:12:46.354177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.729 [2024-05-15 17:12:46.354395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.729 [2024-05-15 17:12:46.354618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.729 [2024-05-15 17:12:46.354627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.729 [2024-05-15 17:12:46.354634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.729 [2024-05-15 17:12:46.358175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.729 [2024-05-15 17:12:46.367164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.729 [2024-05-15 17:12:46.367721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.368040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.729 [2024-05-15 17:12:46.368050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.368058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.368276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.368495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.368502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.368509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.372050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.730 [2024-05-15 17:12:46.381030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.381626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.381975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.381988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.381997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.382235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.382461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.382470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.382477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.386027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.730 [2024-05-15 17:12:46.395006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.395673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.396028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.396041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.396050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.396289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.396510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.396518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.396526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.400076] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.730 [2024-05-15 17:12:46.408852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.409527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.409902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.409916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.409926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.410164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.410385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.410393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.410401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.413949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.730 [2024-05-15 17:12:46.422720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.423378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.423757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.423772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.423782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.424020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.424242] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.424250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.424262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.427811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.730 [2024-05-15 17:12:46.436579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.437181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.437559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.437572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.437582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.437820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.438043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.438052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.438060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.441610] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.730 [2024-05-15 17:12:46.450381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.451064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.451407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.451420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.451430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.451675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.451898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.451906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.451913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.455461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.730 [2024-05-15 17:12:46.464247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.464806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.465145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.465156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.465164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.465382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.465607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.465616] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.465626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.469182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.730 [2024-05-15 17:12:46.478173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.478844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.479187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.479200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.479209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.479447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.479676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.479686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.479693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.483238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.730 [2024-05-15 17:12:46.492142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.730 [2024-05-15 17:12:46.492861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.493201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.730 [2024-05-15 17:12:46.493213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.730 [2024-05-15 17:12:46.493223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.730 [2024-05-15 17:12:46.493461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.730 [2024-05-15 17:12:46.493692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.730 [2024-05-15 17:12:46.493701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.730 [2024-05-15 17:12:46.493709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.730 [2024-05-15 17:12:46.497252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.731 [2024-05-15 17:12:46.506017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.731 [2024-05-15 17:12:46.506663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.507063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.507075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.731 [2024-05-15 17:12:46.507084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.731 [2024-05-15 17:12:46.507323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.731 [2024-05-15 17:12:46.507553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.731 [2024-05-15 17:12:46.507562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.731 [2024-05-15 17:12:46.507570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.731 [2024-05-15 17:12:46.511111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.731 [2024-05-15 17:12:46.519888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.731 [2024-05-15 17:12:46.520477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.520797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.520808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.731 [2024-05-15 17:12:46.520816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.731 [2024-05-15 17:12:46.521035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.731 [2024-05-15 17:12:46.521254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.731 [2024-05-15 17:12:46.521262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.731 [2024-05-15 17:12:46.521269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.731 [2024-05-15 17:12:46.524809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.731 [2024-05-15 17:12:46.533785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.731 [2024-05-15 17:12:46.534327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.534554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.534565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.731 [2024-05-15 17:12:46.534572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.731 [2024-05-15 17:12:46.534791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.731 [2024-05-15 17:12:46.535010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.731 [2024-05-15 17:12:46.535017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.731 [2024-05-15 17:12:46.535024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.731 [2024-05-15 17:12:46.538565] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.731 [2024-05-15 17:12:46.547740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.731 [2024-05-15 17:12:46.548405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.548626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.731 [2024-05-15 17:12:46.548641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.731 [2024-05-15 17:12:46.548650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.731 [2024-05-15 17:12:46.548889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.731 [2024-05-15 17:12:46.549111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.731 [2024-05-15 17:12:46.549120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.731 [2024-05-15 17:12:46.549127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.731 [2024-05-15 17:12:46.552712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.731 [2024-05-15 17:12:46.561724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.562273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.562614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.562629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.562638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.562876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.563099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.563107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.563115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.566666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.994 [2024-05-15 17:12:46.575657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.576216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.576532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.576542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.576556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.576775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.576993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.577002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.577008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.580543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.994 [2024-05-15 17:12:46.589523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.590108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.590425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.590434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.590441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.590664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.590883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.590891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.590897] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.594432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.994 [2024-05-15 17:12:46.603409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.603951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.604352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.604362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.604369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.604592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.604811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.604819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.604826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.608363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.994 [2024-05-15 17:12:46.617375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.618040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.618317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.618330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.618339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.618583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.618807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.618815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.618823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.622363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.994 [2024-05-15 17:12:46.631338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.631902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.632170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.632182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.632189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.632409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.632633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.632641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.632647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.636184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.994 [2024-05-15 17:12:46.645159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.645701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.646026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.646035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.646047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.646265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.646484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.646492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.646499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.994 [2024-05-15 17:12:46.650042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.994 [2024-05-15 17:12:46.659017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.994 [2024-05-15 17:12:46.659557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.659952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.994 [2024-05-15 17:12:46.659961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.994 [2024-05-15 17:12:46.659968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.994 [2024-05-15 17:12:46.660187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.994 [2024-05-15 17:12:46.660405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.994 [2024-05-15 17:12:46.660413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.994 [2024-05-15 17:12:46.660419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.663958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.995 [2024-05-15 17:12:46.672945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.673638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.673994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.674007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.674017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.674255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.674476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.674485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.674492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.678045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.995 [2024-05-15 17:12:46.686858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.687541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.687888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.687901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.687911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.688153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.688376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.688385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.688392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.691939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.995 [2024-05-15 17:12:46.700707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.701367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.701712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.701726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.701735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.701973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.702195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.702204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.702212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.705756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.995 [2024-05-15 17:12:46.714525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.715061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.715379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.715388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.715396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.715619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.715838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.715845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.715852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.719393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.995 [2024-05-15 17:12:46.728368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.728914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.729182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.729194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.729201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.729424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.729648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.729657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.729663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.733200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.995 [2024-05-15 17:12:46.742176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.742726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.743057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.743067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.743074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.743293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.743511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.743518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.743525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.747064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.995 [2024-05-15 17:12:46.756031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.756748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.757094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.757107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.757116] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.757355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.757585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.757594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.757601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.761143] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.995 [2024-05-15 17:12:46.769949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.770606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.770946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.770959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.770968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.771206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.771432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.771440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.771447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.774999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.995 [2024-05-15 17:12:46.783759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.784366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.784620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.784634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.784644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.784882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.995 [2024-05-15 17:12:46.785105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.995 [2024-05-15 17:12:46.785113] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.995 [2024-05-15 17:12:46.785121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.995 [2024-05-15 17:12:46.788665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.995 [2024-05-15 17:12:46.797631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.995 [2024-05-15 17:12:46.798292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.798690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.995 [2024-05-15 17:12:46.798703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.995 [2024-05-15 17:12:46.798713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.995 [2024-05-15 17:12:46.798951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.996 [2024-05-15 17:12:46.799172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.996 [2024-05-15 17:12:46.799180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.996 [2024-05-15 17:12:46.799187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.996 [2024-05-15 17:12:46.802732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.996 [2024-05-15 17:12:46.811497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.996 [2024-05-15 17:12:46.812179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.996 [2024-05-15 17:12:46.812520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.996 [2024-05-15 17:12:46.812532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:07.996 [2024-05-15 17:12:46.812542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:07.996 [2024-05-15 17:12:46.812790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:07.996 [2024-05-15 17:12:46.813012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.996 [2024-05-15 17:12:46.813024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.996 [2024-05-15 17:12:46.813032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.996 [2024-05-15 17:12:46.816581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.996 [2024-05-15 17:12:46.825353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.826015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.826271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.826290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.826300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.826538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.826771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.826780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.826787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.830329] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.259 [2024-05-15 17:12:46.839312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.839977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.840317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.840329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.840339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.840586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.840809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.840817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.840824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.844363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.259 [2024-05-15 17:12:46.853129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.853801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.854144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.854157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.854166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.854404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.854635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.854644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.854656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.858198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.259 [2024-05-15 17:12:46.866970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.867562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.867798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.867808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.867815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.868034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.868253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.868261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.868268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.871809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.259 [2024-05-15 17:12:46.880775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.881444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.881801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.881815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.881824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.882062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.882284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.882292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.882299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.885846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.259 [2024-05-15 17:12:46.894613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.895275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.895618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.895631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.895640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.895879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.896100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.896108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.896116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.899658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.259 [2024-05-15 17:12:46.908428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.909090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.909432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.909444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.909454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.909701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.909924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.909931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.909939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.259 [2024-05-15 17:12:46.913480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.259 [2024-05-15 17:12:46.922249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.259 [2024-05-15 17:12:46.922764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.923159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.259 [2024-05-15 17:12:46.923172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.259 [2024-05-15 17:12:46.923181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.259 [2024-05-15 17:12:46.923420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.259 [2024-05-15 17:12:46.923651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.259 [2024-05-15 17:12:46.923660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.259 [2024-05-15 17:12:46.923667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:46.927208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.260 [2024-05-15 17:12:46.936182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:46.936841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.937181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.937193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:46.937202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:46.937441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:46.937671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:46.937680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:46.937687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:46.941229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.260 [2024-05-15 17:12:46.950004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:46.950775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.951049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.951062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:46.951071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:46.951309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:46.951532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:46.951539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:46.951555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:46.955098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.260 [2024-05-15 17:12:46.963870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:46.964426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.964744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.964755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:46.964762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:46.964981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:46.965200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:46.965213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:46.965220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:46.968767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.260 [2024-05-15 17:12:46.977759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:46.978310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.978647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.978660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:46.978670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:46.978908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:46.979130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:46.979139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:46.979146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:46.982691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.260 [2024-05-15 17:12:46.991666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:46.992325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.992670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:46.992684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:46.992693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:46.992931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:46.993153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:46.993162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:46.993169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:46.996716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.260 [2024-05-15 17:12:47.005482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:47.006128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.006468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.006481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:47.006490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:47.006737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:47.006960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:47.006968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:47.006976] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:47.010515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.260 [2024-05-15 17:12:47.019275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:47.019923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.020278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.020291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:47.020300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:47.020539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:47.020770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:47.020778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:47.020785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:47.024331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.260 [2024-05-15 17:12:47.033096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:47.033772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.034109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.034126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:47.034135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:47.034374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:47.034605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:47.034614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:47.034622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:47.038163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.260 [2024-05-15 17:12:47.046927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:47.047611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.048008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.048020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.260 [2024-05-15 17:12:47.048030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.260 [2024-05-15 17:12:47.048268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.260 [2024-05-15 17:12:47.048490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.260 [2024-05-15 17:12:47.048498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.260 [2024-05-15 17:12:47.048505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.260 [2024-05-15 17:12:47.052054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.260 [2024-05-15 17:12:47.060815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.260 [2024-05-15 17:12:47.061364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.260 [2024-05-15 17:12:47.061681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.261 [2024-05-15 17:12:47.061692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.261 [2024-05-15 17:12:47.061700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.261 [2024-05-15 17:12:47.061919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.261 [2024-05-15 17:12:47.062138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.261 [2024-05-15 17:12:47.062146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.261 [2024-05-15 17:12:47.062152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.261 [2024-05-15 17:12:47.065691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.261 [2024-05-15 17:12:47.074667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.261 [2024-05-15 17:12:47.075317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.261 [2024-05-15 17:12:47.075631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.261 [2024-05-15 17:12:47.075644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.261 [2024-05-15 17:12:47.075657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.261 [2024-05-15 17:12:47.075896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.261 [2024-05-15 17:12:47.076118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.261 [2024-05-15 17:12:47.076126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.261 [2024-05-15 17:12:47.076133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.261 [2024-05-15 17:12:47.079678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.261 [2024-05-15 17:12:47.088656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.261 [2024-05-15 17:12:47.089270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.261 [2024-05-15 17:12:47.089608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.261 [2024-05-15 17:12:47.089622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.261 [2024-05-15 17:12:47.089631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.261 [2024-05-15 17:12:47.089870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.261 [2024-05-15 17:12:47.090092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.261 [2024-05-15 17:12:47.090101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.261 [2024-05-15 17:12:47.090108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.093660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.524 [2024-05-15 17:12:47.102638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.103277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.103618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.103632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.103642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.103880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.104102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.104110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.104117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.107663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.524 [2024-05-15 17:12:47.116428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.116989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.117312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.117321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.117329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.117557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.117777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.117786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.117793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.121331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.524 [2024-05-15 17:12:47.130308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.130986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.131326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.131339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.131348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.131595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.131817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.131825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.131832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.135380] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.524 [2024-05-15 17:12:47.144141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.144817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.145155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.145167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.145177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.145415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.145646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.145655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.145662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.149208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.524 [2024-05-15 17:12:47.157973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.158570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.158902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.158912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.158919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.159139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.159362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.159369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.159376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.162918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.524 [2024-05-15 17:12:47.171896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.172552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.172943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.172955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.172965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.173203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.173425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.173433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.173441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.176991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.524 [2024-05-15 17:12:47.185779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.186491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.186835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.186849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.186858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.187097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.187318] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.187327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.187335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.190880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.524 [2024-05-15 17:12:47.199676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.200163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.200574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.200588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.524 [2024-05-15 17:12:47.200597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.524 [2024-05-15 17:12:47.200836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.524 [2024-05-15 17:12:47.201058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.524 [2024-05-15 17:12:47.201070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.524 [2024-05-15 17:12:47.201078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.524 [2024-05-15 17:12:47.204627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.524 [2024-05-15 17:12:47.213609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.524 [2024-05-15 17:12:47.214281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.524 [2024-05-15 17:12:47.214626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.214639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.214649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.214887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.215109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.215118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.215125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.218672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.525 [2024-05-15 17:12:47.227439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.228097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.228438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.228450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.228460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.228706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.228929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.228937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.228944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.232492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.525 [2024-05-15 17:12:47.241258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.241916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.242257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.242269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.242279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.242517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.242748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.242757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.242769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.246311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.525 [2024-05-15 17:12:47.255069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.255785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.256127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.256139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.256148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.256386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.256616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.256627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.256634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.260177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.525 [2024-05-15 17:12:47.268954] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.269519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.269919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.269932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.269941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.270179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.270401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.270409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.270417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.273965] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.525 [2024-05-15 17:12:47.282935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.283624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.283902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.283914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.283923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.284162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.284384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.284393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.284401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.287958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.525 [2024-05-15 17:12:47.296737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.297416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.297667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.297681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.297691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.297931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.298154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.298162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.298169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.301712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.525 [2024-05-15 17:12:47.310688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.311238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.311578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.311591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.311600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.311839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.312061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.312069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.312077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.315623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.525 [2024-05-15 17:12:47.324480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.325080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.325394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.325403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.325410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.325635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.325854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.325862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.525 [2024-05-15 17:12:47.325868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.525 [2024-05-15 17:12:47.329410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.525 [2024-05-15 17:12:47.338390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.525 [2024-05-15 17:12:47.339052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.339392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.525 [2024-05-15 17:12:47.339405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.525 [2024-05-15 17:12:47.339414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.525 [2024-05-15 17:12:47.339660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.525 [2024-05-15 17:12:47.339882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.525 [2024-05-15 17:12:47.339890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.526 [2024-05-15 17:12:47.339898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.526 [2024-05-15 17:12:47.343436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.526 [2024-05-15 17:12:47.352200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.526 [2024-05-15 17:12:47.352864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.526 [2024-05-15 17:12:47.353202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.526 [2024-05-15 17:12:47.353214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.526 [2024-05-15 17:12:47.353224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.526 [2024-05-15 17:12:47.353462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.526 [2024-05-15 17:12:47.353694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.526 [2024-05-15 17:12:47.353703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.526 [2024-05-15 17:12:47.353711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.357258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.790 [2024-05-15 17:12:47.366026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.366722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.367065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.367078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.367087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.790 [2024-05-15 17:12:47.367326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.790 [2024-05-15 17:12:47.367564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.790 [2024-05-15 17:12:47.367573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.790 [2024-05-15 17:12:47.367580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.371123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.790 [2024-05-15 17:12:47.379888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.380566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.380964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.380977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.380986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.790 [2024-05-15 17:12:47.381224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.790 [2024-05-15 17:12:47.381446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.790 [2024-05-15 17:12:47.381454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.790 [2024-05-15 17:12:47.381461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.385008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.790 [2024-05-15 17:12:47.393797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.394381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.394776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.394789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.394798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.790 [2024-05-15 17:12:47.395037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.790 [2024-05-15 17:12:47.395259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.790 [2024-05-15 17:12:47.395267] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.790 [2024-05-15 17:12:47.395274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.398819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.790 [2024-05-15 17:12:47.407585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.408265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.408501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.408513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.408522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.790 [2024-05-15 17:12:47.408770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.790 [2024-05-15 17:12:47.408993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.790 [2024-05-15 17:12:47.409001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.790 [2024-05-15 17:12:47.409008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.412553] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.790 [2024-05-15 17:12:47.421524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.422201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.422537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.422562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.422572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.790 [2024-05-15 17:12:47.422811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.790 [2024-05-15 17:12:47.423033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.790 [2024-05-15 17:12:47.423041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.790 [2024-05-15 17:12:47.423048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.426593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.790 [2024-05-15 17:12:47.435355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.436029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.436273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.436286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.436295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.790 [2024-05-15 17:12:47.436533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.790 [2024-05-15 17:12:47.436765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.790 [2024-05-15 17:12:47.436774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.790 [2024-05-15 17:12:47.436781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.790 [2024-05-15 17:12:47.440327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.790 [2024-05-15 17:12:47.449300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.790 [2024-05-15 17:12:47.449965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.450215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.790 [2024-05-15 17:12:47.450227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.790 [2024-05-15 17:12:47.450237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.450474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.450705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.450714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.450722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.454266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.791 [2024-05-15 17:12:47.463259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.463825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.464149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.464159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.464171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.464391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.464614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.464623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.464630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.468183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.791 [2024-05-15 17:12:47.477171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.477839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.478180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.478192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.478201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.478440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.478668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.478677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.478684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.482226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.791 [2024-05-15 17:12:47.490986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.491574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.491932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.491943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.491950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.492170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.492388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.492396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.492402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.495941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.791 [2024-05-15 17:12:47.504916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.505500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.505855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.505865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.505873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.506096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.506315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.506322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.506329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.509869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.791 [2024-05-15 17:12:47.518847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.519494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.519834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.519848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.519857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.520095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.520317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.520325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.520333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.523888] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.791 [2024-05-15 17:12:47.532665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.533322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.533675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.533689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.533699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.533938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.534160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.534168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.534175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.537725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.791 [2024-05-15 17:12:47.546577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.547238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.547622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.547636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.547646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.547884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.548113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.548122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.548130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.551677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.791 [2024-05-15 17:12:47.560438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.561031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.561368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.561380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.561390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.561636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.561859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.561868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.561875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.565416] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.791 [2024-05-15 17:12:47.574405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.575048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.575387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.575400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.791 [2024-05-15 17:12:47.575409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.791 [2024-05-15 17:12:47.575656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.791 [2024-05-15 17:12:47.575878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.791 [2024-05-15 17:12:47.575887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.791 [2024-05-15 17:12:47.575894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.791 [2024-05-15 17:12:47.579436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.791 [2024-05-15 17:12:47.588205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.791 [2024-05-15 17:12:47.588872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.791 [2024-05-15 17:12:47.589208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.792 [2024-05-15 17:12:47.589221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.792 [2024-05-15 17:12:47.589230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.792 [2024-05-15 17:12:47.589468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.792 [2024-05-15 17:12:47.589697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.792 [2024-05-15 17:12:47.589711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.792 [2024-05-15 17:12:47.589718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.792 [2024-05-15 17:12:47.593262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.792 [2024-05-15 17:12:47.602060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.792 [2024-05-15 17:12:47.602661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.792 [2024-05-15 17:12:47.603027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.792 [2024-05-15 17:12:47.603039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.792 [2024-05-15 17:12:47.603048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.792 [2024-05-15 17:12:47.603286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.792 [2024-05-15 17:12:47.603508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.792 [2024-05-15 17:12:47.603516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.792 [2024-05-15 17:12:47.603523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.792 [2024-05-15 17:12:47.607077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.792 [2024-05-15 17:12:47.615847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.792 [2024-05-15 17:12:47.616513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.792 [2024-05-15 17:12:47.616956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.792 [2024-05-15 17:12:47.616970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:08.792 [2024-05-15 17:12:47.616979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:08.792 [2024-05-15 17:12:47.617217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:08.792 [2024-05-15 17:12:47.617440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.792 [2024-05-15 17:12:47.617448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.792 [2024-05-15 17:12:47.617455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.792 [2024-05-15 17:12:47.621003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.054 [2024-05-15 17:12:47.629773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.630429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.630784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.630799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.630808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.631047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.631269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.631278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.631289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.634836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.054 [2024-05-15 17:12:47.643595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.644135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.644501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.644513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.644522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.644771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.644995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.645003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.645010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.648558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.054 [2024-05-15 17:12:47.657522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.658142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.658483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.658496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.658505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.658752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.658975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.658983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.658990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.662532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.054 [2024-05-15 17:12:47.671306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.671947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.672287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.672300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.672309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.672555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.672778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.672787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.672794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.676340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.054 [2024-05-15 17:12:47.685102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.685655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.686050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.686062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.686071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.686310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.686532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.686540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.686556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.690100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.054 [2024-05-15 17:12:47.699078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.699645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.699991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.700004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.700013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.700251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.700473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.700489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.700496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.704044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.054 [2024-05-15 17:12:47.713020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.054 [2024-05-15 17:12:47.713653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.714054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.054 [2024-05-15 17:12:47.714066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.054 [2024-05-15 17:12:47.714076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.054 [2024-05-15 17:12:47.714314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.054 [2024-05-15 17:12:47.714536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.054 [2024-05-15 17:12:47.714552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.054 [2024-05-15 17:12:47.714560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.054 [2024-05-15 17:12:47.718101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.055 [2024-05-15 17:12:47.726871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.727569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.727911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.727923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.727933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.728171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.728393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.728401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.728408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.731960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.055 [2024-05-15 17:12:47.740742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.741362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.741707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.741722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.741732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.741970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.742193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.742236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.742244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.745791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.055 [2024-05-15 17:12:47.754571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.755210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.755613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.755627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.755637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.755875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.756096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.756104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.756111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.759658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.055 [2024-05-15 17:12:47.768435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.769148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.769518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.769530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.769540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.769784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.770007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.770016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.770023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.773572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.055 [2024-05-15 17:12:47.782336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.782999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.783391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.783404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.783413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.783658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.783881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.783889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.783899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.787443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.055 [2024-05-15 17:12:47.796204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.796787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.797111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.797121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.797129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.797347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.797570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.797578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.797584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.801122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.055 [2024-05-15 17:12:47.810127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.810864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.811206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.811223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.811233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.811471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.811700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.811710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.811718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.815260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.055 [2024-05-15 17:12:47.824025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.824617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.824970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.824980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.824988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.825211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.825431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.825439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.825447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.828994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.055 [2024-05-15 17:12:47.837989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.838632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.838989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.839002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.839012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.839250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.839472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.055 [2024-05-15 17:12:47.839481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.055 [2024-05-15 17:12:47.839487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.055 [2024-05-15 17:12:47.843050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.055 [2024-05-15 17:12:47.851821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.055 [2024-05-15 17:12:47.852502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.852757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.055 [2024-05-15 17:12:47.852773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.055 [2024-05-15 17:12:47.852786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.055 [2024-05-15 17:12:47.853025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.055 [2024-05-15 17:12:47.853247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.056 [2024-05-15 17:12:47.853255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.056 [2024-05-15 17:12:47.853262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.056 [2024-05-15 17:12:47.856815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.056 [2024-05-15 17:12:47.865799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.056 [2024-05-15 17:12:47.866473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.056 [2024-05-15 17:12:47.866811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.056 [2024-05-15 17:12:47.866826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.056 [2024-05-15 17:12:47.866836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.056 [2024-05-15 17:12:47.867074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.056 [2024-05-15 17:12:47.867296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.056 [2024-05-15 17:12:47.867304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.056 [2024-05-15 17:12:47.867311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.056 [2024-05-15 17:12:47.870871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.056 [2024-05-15 17:12:47.879645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.056 [2024-05-15 17:12:47.880309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.056 [2024-05-15 17:12:47.880658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.056 [2024-05-15 17:12:47.880673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.056 [2024-05-15 17:12:47.880683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.056 [2024-05-15 17:12:47.880922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.056 [2024-05-15 17:12:47.881143] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.056 [2024-05-15 17:12:47.881152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.056 [2024-05-15 17:12:47.881159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.056 [2024-05-15 17:12:47.884701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.318 [2024-05-15 17:12:47.893464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.894015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.894375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.894384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.894392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.894621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.894840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.894847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.894854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.898393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.318 [2024-05-15 17:12:47.907366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.907889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.908242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.908255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.908264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.908503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.908733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.908743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.908750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.912294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.318 [2024-05-15 17:12:47.921280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.921944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.922282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.922295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.922304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.922543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.922777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.922786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.922793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.926338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.318 [2024-05-15 17:12:47.935095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.935847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.936184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.936197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.936207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.936445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.936680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.936689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.936696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.940242] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.318 [2024-05-15 17:12:47.949013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.949654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.950005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.950019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.950028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.950267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.950489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.950497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.950505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.954054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.318 [2024-05-15 17:12:47.962830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.963264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.963580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.963591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.963599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.963819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.964037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.964045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.964052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.967602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.318 [2024-05-15 17:12:47.976788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.977328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.977645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.977655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.977663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.977881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.978099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.978111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.978118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.981655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.318 [2024-05-15 17:12:47.990626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:47.991206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.991519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:47.991529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:47.991536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.318 [2024-05-15 17:12:47.991759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.318 [2024-05-15 17:12:47.991978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.318 [2024-05-15 17:12:47.991987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.318 [2024-05-15 17:12:47.991993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.318 [2024-05-15 17:12:47.995534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.318 [2024-05-15 17:12:48.004508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.318 [2024-05-15 17:12:48.005054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:48.005367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.318 [2024-05-15 17:12:48.005376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.318 [2024-05-15 17:12:48.005384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.005607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.005825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.005833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.005839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.009377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.319 [2024-05-15 17:12:48.018379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.018931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.019241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.019251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.019258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.019477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.019700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.019709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.019719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.023261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.319 [2024-05-15 17:12:48.032235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.032775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.033091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.033101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.033108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.033327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.033548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.033556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.033563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.037103] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.319 [2024-05-15 17:12:48.046076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.046659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.047013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.047022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.047030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.047248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.047467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.047474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.047481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.051023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.319 [2024-05-15 17:12:48.059995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.060646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.061044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.061056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.061066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.061304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.061526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.061535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.061543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.065108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.319 [2024-05-15 17:12:48.073889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.074471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.074795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.074806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.074814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.075033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.075251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.075259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.075266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.078807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.319 [2024-05-15 17:12:48.087787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.088459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.088835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.088850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.088860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.089098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.089320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.089328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.089335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.092882] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.319 [2024-05-15 17:12:48.101654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.102204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.102521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.102531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.102539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.102764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.102983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.102992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.102998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.106534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.319 [2024-05-15 17:12:48.115511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.116156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.116586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.116600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.116610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.116848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.117071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.117079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.117086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.120634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.319 [2024-05-15 17:12:48.129407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.129975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.130365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.130375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.319 [2024-05-15 17:12:48.130383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.319 [2024-05-15 17:12:48.130606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.319 [2024-05-15 17:12:48.130826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.319 [2024-05-15 17:12:48.130833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.319 [2024-05-15 17:12:48.130840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.319 [2024-05-15 17:12:48.134378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.319 [2024-05-15 17:12:48.143353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.319 [2024-05-15 17:12:48.144010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.319 [2024-05-15 17:12:48.144351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.320 [2024-05-15 17:12:48.144364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.320 [2024-05-15 17:12:48.144373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.320 [2024-05-15 17:12:48.144619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.320 [2024-05-15 17:12:48.144842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.320 [2024-05-15 17:12:48.144850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.320 [2024-05-15 17:12:48.144857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.320 [2024-05-15 17:12:48.148401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.583 [2024-05-15 17:12:48.157172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.157723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.158063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.158073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.158082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.158301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.158519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.158528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.158535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.162080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.583 [2024-05-15 17:12:48.171055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.171627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.171956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.171966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.171973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.172192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.172410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.172418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.172424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.175965] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.583 [2024-05-15 17:12:48.184933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.185519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.185830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.185840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.185847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.186066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.186283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.186299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.186306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.189847] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.583 [2024-05-15 17:12:48.198827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.199363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.199688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.199707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.199714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.199932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.200150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.200159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.200166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.203706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.583 [2024-05-15 17:12:48.212685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.213274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.213591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.213601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.213608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.213827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.214044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.214052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.214059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.217600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.583 [2024-05-15 17:12:48.226606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.227262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.227602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.227617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.227626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.227865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.228087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.228096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.228104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.231651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.583 [2024-05-15 17:12:48.240417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.240990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.241302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.241311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.241323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.241543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.241768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.241776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.241783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.245322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.583 [2024-05-15 17:12:48.254298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.254859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.255193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.583 [2024-05-15 17:12:48.255203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.583 [2024-05-15 17:12:48.255211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.583 [2024-05-15 17:12:48.255429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.583 [2024-05-15 17:12:48.255653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.583 [2024-05-15 17:12:48.255661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.583 [2024-05-15 17:12:48.255668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.583 [2024-05-15 17:12:48.259205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.583 [2024-05-15 17:12:48.268189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.583 [2024-05-15 17:12:48.268858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.269201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.269213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.269223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.269461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.269690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.269700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.269708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.273259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.584 [2024-05-15 17:12:48.282031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.282596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.282921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.282931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.282939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.283167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.283387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.283394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.283401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.286945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.584 [2024-05-15 17:12:48.295922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.296349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.296688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.296699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.296706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.296924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.297142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.297149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.297156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.300697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.584 [2024-05-15 17:12:48.309883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.310467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.310817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.310832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.310841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.311080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.311302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.311311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.311318] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.314867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.584 [2024-05-15 17:12:48.323849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.324513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.324866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.324879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.324888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.325126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.325353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.325362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.325369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.328917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.584 [2024-05-15 17:12:48.337678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.338264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.338493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.338503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.338510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.338735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.338961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.338969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.338975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.342513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.584 [2024-05-15 17:12:48.351581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.352206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.352501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.352514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.352523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.352768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.352991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.352999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.353007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.356554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.584 [2024-05-15 17:12:48.365535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.366086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.366398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.366408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.366416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.366640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.366859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.366871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.366878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.370428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.584 [2024-05-15 17:12:48.379407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.380099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.380437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.380450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.380459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.380706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.380928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.584 [2024-05-15 17:12:48.380937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.584 [2024-05-15 17:12:48.380945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.584 [2024-05-15 17:12:48.384487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.584 [2024-05-15 17:12:48.393261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.584 [2024-05-15 17:12:48.393740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.394093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-05-15 17:12:48.394104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-05-15 17:12:48.394111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.584 [2024-05-15 17:12:48.394330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.584 [2024-05-15 17:12:48.394553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.585 [2024-05-15 17:12:48.394561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.585 [2024-05-15 17:12:48.394568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.585 [2024-05-15 17:12:48.398106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.585 [2024-05-15 17:12:48.407083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.585 [2024-05-15 17:12:48.407520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.585 [2024-05-15 17:12:48.407829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.585 [2024-05-15 17:12:48.407840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.585 [2024-05-15 17:12:48.407847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.585 [2024-05-15 17:12:48.408067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.585 [2024-05-15 17:12:48.408286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.585 [2024-05-15 17:12:48.408293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.585 [2024-05-15 17:12:48.408304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.585 [2024-05-15 17:12:48.411850] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.847 [2024-05-15 17:12:48.421033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.847 [2024-05-15 17:12:48.421665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-05-15 17:12:48.422023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-05-15 17:12:48.422036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-05-15 17:12:48.422045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.847 [2024-05-15 17:12:48.422283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.847 [2024-05-15 17:12:48.422505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.847 [2024-05-15 17:12:48.422513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.847 [2024-05-15 17:12:48.422521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.847 [2024-05-15 17:12:48.426073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
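The retry blocks above show the bdev_nvme reset path repeatedly failing to reconnect to 10.0.0.2:4420 with errno 111 (ECONNREFUSED), i.e. nothing is listening on the target port while the nvmf target is down. A minimal, hedged bash sketch of an equivalent liveness probe follows; it is not part of the SPDK test scripts, and the address, port, and retry timing are taken from the log or assumed for illustration.

# Hedged sketch, not SPDK code: poll a TCP listener the way the log's reconnect
# loop effectively does, treating "connection refused" (errno 111) as transient.
TARGET_IP=10.0.0.2      # address seen in the log above
TARGET_PORT=4420        # NVMe/TCP port seen in the log above
for _ in $(seq 1 60); do
    # bash's /dev/tcp pseudo-device attempts a TCP connect; it fails while the
    # target is down, matching the posix_sock_create errno = 111 errors above.
    if timeout 1 bash -c "exec 3<>/dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
        echo "port ${TARGET_PORT} on ${TARGET_IP} is accepting connections again"
        break
    fi
    sleep 0.5
done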
00:28:09.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1638088 Killed "${NVMF_APP[@]}" "$@" 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:09.847 [2024-05-15 17:12:48.434875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.847 [2024-05-15 17:12:48.435518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-05-15 17:12:48.435909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-05-15 17:12:48.435923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-05-15 17:12:48.435933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.847 [2024-05-15 17:12:48.436171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.847 [2024-05-15 17:12:48.436393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.847 [2024-05-15 17:12:48.436402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.847 [2024-05-15 17:12:48.436409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1639757 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1639757 00:28:09.847 17:12:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:09.848 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1639757 ']' 00:28:09.848 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.848 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.848 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.848 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.848 17:12:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:09.848 [2024-05-15 17:12:48.439957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
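The trace above shows the previous target process being killed and tgt_init re-running nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waiting for it to listen on /var/tmp/spdk.sock. A rough, hedged equivalent of that start-and-wait step is sketched below; it is not the SPDK helper functions themselves (nvmfappstart/waitforlisten), and the binary path, namespace name, and poll interval are copied from the log or assumed.

# Hedged sketch, not the SPDK common helpers: start the target in its network
# namespace and poll until the RPC UNIX socket appears.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt   # path from the log
RPC_SOCK=/var/tmp/spdk.sock                                                     # socket named in the log

ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
tgt_pid=$!

for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break                       # target is up once the socket exists
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.1
done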
00:28:09.848 [2024-05-15 17:12:48.448727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.449392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.449643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.449658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.449669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.449908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.450131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.450140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.450149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.453700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.848 [2024-05-15 17:12:48.462680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.463349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.463706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.463721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.463730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.463969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.464191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.464200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.464207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.467764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.848 [2024-05-15 17:12:48.476581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.477211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.477558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.477571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.477581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.477819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.478042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.478057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.478069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.481628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.848 [2024-05-15 17:12:48.486838] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:09.848 [2024-05-15 17:12:48.486881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.848 [2024-05-15 17:12:48.490396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.491076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.491333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.491352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.491361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.491607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.491830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.491838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.491846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.495390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.848 [2024-05-15 17:12:48.504363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.505065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.505326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.505339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.505349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.505596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.505819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.505828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.505835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.509382] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.848 [2024-05-15 17:12:48.518153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.848 [2024-05-15 17:12:48.518763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.519112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.519124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.519134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.519372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.519601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.519615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.519623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.523173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.848 [2024-05-15 17:12:48.531951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.532651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.532999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.533012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.533021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.533260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.533482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.533491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.533498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.537043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.848 [2024-05-15 17:12:48.545807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.848 [2024-05-15 17:12:48.546363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.546691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.848 [2024-05-15 17:12:48.546702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.848 [2024-05-15 17:12:48.546711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.848 [2024-05-15 17:12:48.546931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.848 [2024-05-15 17:12:48.547150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.848 [2024-05-15 17:12:48.547157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.848 [2024-05-15 17:12:48.547165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.848 [2024-05-15 17:12:48.550704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.849 [2024-05-15 17:12:48.559681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.560355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.560592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.560606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.560616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.560854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.561077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.561085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.561097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.564643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.849 [2024-05-15 17:12:48.566772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:09.849 [2024-05-15 17:12:48.573638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.574202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.574567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.574578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.574586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.574806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.575024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.575032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.575039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.578583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.849 [2024-05-15 17:12:48.587569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.588234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.588643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.588657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.588667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.588909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.589131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.589139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.589147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.592695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.849 [2024-05-15 17:12:48.601381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.601996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.602319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.602329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.602337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.602561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.602781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.602789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.602802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.606339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.849 [2024-05-15 17:12:48.615311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.615960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.616308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.616321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.616331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.616582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.616805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.616814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.616821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.620357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.849 [2024-05-15 17:12:48.620366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.849 [2024-05-15 17:12:48.620379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.849 [2024-05-15 17:12:48.620385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.849 [2024-05-15 17:12:48.620390] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.849 [2024-05-15 17:12:48.620394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:09.849 [2024-05-15 17:12:48.620538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.849 [2024-05-15 17:12:48.620687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.849 [2024-05-15 17:12:48.620840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.849 [2024-05-15 17:12:48.629134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.629810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.630061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.630074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.630084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.630325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.630555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.630564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.630571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.634115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.849 [2024-05-15 17:12:48.643152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.643827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.644179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.644197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.849 [2024-05-15 17:12:48.644207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.849 [2024-05-15 17:12:48.644447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.849 [2024-05-15 17:12:48.644677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.849 [2024-05-15 17:12:48.644686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.849 [2024-05-15 17:12:48.644693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.849 [2024-05-15 17:12:48.648238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:09.849 [2024-05-15 17:12:48.657001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.849 [2024-05-15 17:12:48.657594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.849 [2024-05-15 17:12:48.657966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.850 [2024-05-15 17:12:48.657976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.850 [2024-05-15 17:12:48.657985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.850 [2024-05-15 17:12:48.658206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.850 [2024-05-15 17:12:48.658425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.850 [2024-05-15 17:12:48.658432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.850 [2024-05-15 17:12:48.658439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.850 [2024-05-15 17:12:48.661978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:09.850 [2024-05-15 17:12:48.670796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:09.850 [2024-05-15 17:12:48.671231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.850 [2024-05-15 17:12:48.671566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.850 [2024-05-15 17:12:48.671577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:09.850 [2024-05-15 17:12:48.671584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:09.850 [2024-05-15 17:12:48.671804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:09.850 [2024-05-15 17:12:48.672022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:09.850 [2024-05-15 17:12:48.672030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:09.850 [2024-05-15 17:12:48.672036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:09.850 [2024-05-15 17:12:48.675575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.112 [2024-05-15 17:12:48.684759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.112 [2024-05-15 17:12:48.685448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.685808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.685823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.112 [2024-05-15 17:12:48.685837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.112 [2024-05-15 17:12:48.686077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.112 [2024-05-15 17:12:48.686299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.112 [2024-05-15 17:12:48.686307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.112 [2024-05-15 17:12:48.686315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.112 [2024-05-15 17:12:48.689873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.112 [2024-05-15 17:12:48.698640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.112 [2024-05-15 17:12:48.699086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.699428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.699437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.112 [2024-05-15 17:12:48.699445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.112 [2024-05-15 17:12:48.699677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.112 [2024-05-15 17:12:48.699899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.112 [2024-05-15 17:12:48.699907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.112 [2024-05-15 17:12:48.699914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.112 [2024-05-15 17:12:48.703453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.112 [2024-05-15 17:12:48.712429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.112 [2024-05-15 17:12:48.713094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.713425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.713439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.112 [2024-05-15 17:12:48.713448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.112 [2024-05-15 17:12:48.713694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.112 [2024-05-15 17:12:48.713917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.112 [2024-05-15 17:12:48.713925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.112 [2024-05-15 17:12:48.713932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.112 [2024-05-15 17:12:48.717476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.112 [2024-05-15 17:12:48.726250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.112 [2024-05-15 17:12:48.726943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.727297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.727310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.112 [2024-05-15 17:12:48.727319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.112 [2024-05-15 17:12:48.727569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.112 [2024-05-15 17:12:48.727792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.112 [2024-05-15 17:12:48.727801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.112 [2024-05-15 17:12:48.727808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.112 [2024-05-15 17:12:48.731354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.112 [2024-05-15 17:12:48.740123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.112 [2024-05-15 17:12:48.740891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.741252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.741265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.112 [2024-05-15 17:12:48.741274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.112 [2024-05-15 17:12:48.741513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.112 [2024-05-15 17:12:48.741742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.112 [2024-05-15 17:12:48.741752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.112 [2024-05-15 17:12:48.741759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.112 [2024-05-15 17:12:48.745302] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.112 [2024-05-15 17:12:48.754072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.112 [2024-05-15 17:12:48.754661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.112 [2024-05-15 17:12:48.755079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.755091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.755100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.755339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.755569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.755578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.755586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.759132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.113 [2024-05-15 17:12:48.767912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.768536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.768971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.768983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.768993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.769231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.769458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.769466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.769473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.773020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.113 [2024-05-15 17:12:48.781791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.782461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.782854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.782868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.782877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.783116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.783339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.783347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.783354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.786905] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.113 [2024-05-15 17:12:48.795676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.796324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.796691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.796706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.796716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.796954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.797176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.797185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.797192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.800747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.113 [2024-05-15 17:12:48.809513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.810211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.810442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.810454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.810463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.810709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.810933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.810945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.810952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.814494] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.113 [2024-05-15 17:12:48.823469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.824177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.824536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.824555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.824565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.824803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.825025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.825033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.825040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.828589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.113 [2024-05-15 17:12:48.837360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.838045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.838262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.838274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.838283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.838522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.838751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.838760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.838768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.842308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.113 [2024-05-15 17:12:48.851336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.852031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.852382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.852394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.852404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.852649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.852871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.852879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.852894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.856438] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.113 [2024-05-15 17:12:48.865210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.865917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.866274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.866287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.866297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.866535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.866763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.866773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.866780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.870334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.113 [2024-05-15 17:12:48.879107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.113 [2024-05-15 17:12:48.879682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.880088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.113 [2024-05-15 17:12:48.880101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.113 [2024-05-15 17:12:48.880110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.113 [2024-05-15 17:12:48.880348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.113 [2024-05-15 17:12:48.880577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.113 [2024-05-15 17:12:48.880587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.113 [2024-05-15 17:12:48.880594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.113 [2024-05-15 17:12:48.884135] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.114 [2024-05-15 17:12:48.892900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.114 [2024-05-15 17:12:48.893619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.893877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.893893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-05-15 17:12:48.893902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.114 [2024-05-15 17:12:48.894141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.114 [2024-05-15 17:12:48.894364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.114 [2024-05-15 17:12:48.894372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.114 [2024-05-15 17:12:48.894379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.114 [2024-05-15 17:12:48.897944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.114 [2024-05-15 17:12:48.906710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.114 [2024-05-15 17:12:48.907395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.907781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.907797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-05-15 17:12:48.907806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.114 [2024-05-15 17:12:48.908044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.114 [2024-05-15 17:12:48.908266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.114 [2024-05-15 17:12:48.908275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.114 [2024-05-15 17:12:48.908282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.114 [2024-05-15 17:12:48.911827] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.114 [2024-05-15 17:12:48.920593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.114 [2024-05-15 17:12:48.921256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.921625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.921640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-05-15 17:12:48.921650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.114 [2024-05-15 17:12:48.921888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.114 [2024-05-15 17:12:48.922110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.114 [2024-05-15 17:12:48.922118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.114 [2024-05-15 17:12:48.922126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.114 [2024-05-15 17:12:48.925675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.114 [2024-05-15 17:12:48.934451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.114 [2024-05-15 17:12:48.935136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.935497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.114 [2024-05-15 17:12:48.935510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.114 [2024-05-15 17:12:48.935519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.114 [2024-05-15 17:12:48.935766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.114 [2024-05-15 17:12:48.935988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.114 [2024-05-15 17:12:48.935997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.114 [2024-05-15 17:12:48.936004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.114 [2024-05-15 17:12:48.939549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.377 [2024-05-15 17:12:48.948314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:48.949010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.949360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.949373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:48.949383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.377 [2024-05-15 17:12:48.949628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.377 [2024-05-15 17:12:48.949851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.377 [2024-05-15 17:12:48.949859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.377 [2024-05-15 17:12:48.949866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.377 [2024-05-15 17:12:48.953409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.377 [2024-05-15 17:12:48.962178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:48.962872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.963225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.963237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:48.963247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.377 [2024-05-15 17:12:48.963484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.377 [2024-05-15 17:12:48.963713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.377 [2024-05-15 17:12:48.963722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.377 [2024-05-15 17:12:48.963729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.377 [2024-05-15 17:12:48.967271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.377 [2024-05-15 17:12:48.976057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:48.976661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.977070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.977082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:48.977092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.377 [2024-05-15 17:12:48.977330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.377 [2024-05-15 17:12:48.977559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.377 [2024-05-15 17:12:48.977569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.377 [2024-05-15 17:12:48.977576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.377 [2024-05-15 17:12:48.981121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.377 [2024-05-15 17:12:48.989890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:48.990610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.991047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:48.991060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:48.991069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.377 [2024-05-15 17:12:48.991307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.377 [2024-05-15 17:12:48.991529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.377 [2024-05-15 17:12:48.991537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.377 [2024-05-15 17:12:48.991544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.377 [2024-05-15 17:12:48.995093] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.377 [2024-05-15 17:12:49.003860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:49.004519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:49.004893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:49.004907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:49.004916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.377 [2024-05-15 17:12:49.005154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.377 [2024-05-15 17:12:49.005375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.377 [2024-05-15 17:12:49.005384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.377 [2024-05-15 17:12:49.005391] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.377 [2024-05-15 17:12:49.008939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.377 [2024-05-15 17:12:49.017709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:49.018360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:49.018637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:49.018652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:49.018661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.377 [2024-05-15 17:12:49.018900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.377 [2024-05-15 17:12:49.019122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.377 [2024-05-15 17:12:49.019131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.377 [2024-05-15 17:12:49.019138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.377 [2024-05-15 17:12:49.022686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.377 [2024-05-15 17:12:49.031668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.377 [2024-05-15 17:12:49.032222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:49.032760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.377 [2024-05-15 17:12:49.032800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.377 [2024-05-15 17:12:49.032811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.033050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.033272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.033280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.033287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.036837] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.378 [2024-05-15 17:12:49.045608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.046307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.046726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.046741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.046751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.046989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.047211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.047220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.047227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.050774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.378 [2024-05-15 17:12:49.059573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.060140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.060370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.060379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.060387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.060613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.060833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.060840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.060847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.064385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.378 [2024-05-15 17:12:49.073364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.074019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.074368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.074380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.074394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.074640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.074863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.074871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.074879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.078421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.378 [2024-05-15 17:12:49.087187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.087846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.088195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.088207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.088216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.088455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.088683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.088692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.088700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.092240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.378 [2024-05-15 17:12:49.101015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.101555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.101934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.101944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.101951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.102169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.102387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.102395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.102401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.105949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.378 [2024-05-15 17:12:49.115014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.115456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.115657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.115667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.115674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.115898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.116117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.116124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.116131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.119672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.378 [2024-05-15 17:12:49.128848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.129421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.129754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.129765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.129772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.129991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.130209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.130216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.130223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.133763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.378 [2024-05-15 17:12:49.142731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.143283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.143698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.143708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.143715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.143933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.144151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.144159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.144165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.147703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.378 [2024-05-15 17:12:49.156674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.157280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.157543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.157557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.378 [2024-05-15 17:12:49.157565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.378 [2024-05-15 17:12:49.157783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.378 [2024-05-15 17:12:49.158006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.378 [2024-05-15 17:12:49.158013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.378 [2024-05-15 17:12:49.158021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.378 [2024-05-15 17:12:49.161561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.378 [2024-05-15 17:12:49.170541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.378 [2024-05-15 17:12:49.171205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.378 [2024-05-15 17:12:49.171475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.379 [2024-05-15 17:12:49.171487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.379 [2024-05-15 17:12:49.171497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.379 [2024-05-15 17:12:49.171744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.379 [2024-05-15 17:12:49.171967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.379 [2024-05-15 17:12:49.171975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.379 [2024-05-15 17:12:49.171983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.379 [2024-05-15 17:12:49.175525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.379 [2024-05-15 17:12:49.184495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.379 [2024-05-15 17:12:49.185054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.379 [2024-05-15 17:12:49.185380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.379 [2024-05-15 17:12:49.185390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.379 [2024-05-15 17:12:49.185398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.379 [2024-05-15 17:12:49.185622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.379 [2024-05-15 17:12:49.185841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.379 [2024-05-15 17:12:49.185850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.379 [2024-05-15 17:12:49.185858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.379 [2024-05-15 17:12:49.189395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.379 [2024-05-15 17:12:49.198368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.379 [2024-05-15 17:12:49.199021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.379 [2024-05-15 17:12:49.199304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.379 [2024-05-15 17:12:49.199316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.379 [2024-05-15 17:12:49.199326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.379 [2024-05-15 17:12:49.199572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.379 [2024-05-15 17:12:49.199795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.379 [2024-05-15 17:12:49.199807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.379 [2024-05-15 17:12:49.199815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.379 [2024-05-15 17:12:49.203357] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.640 [2024-05-15 17:12:49.212332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.640 [2024-05-15 17:12:49.212774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.640 [2024-05-15 17:12:49.213113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.640 [2024-05-15 17:12:49.213123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.640 [2024-05-15 17:12:49.213130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.640 [2024-05-15 17:12:49.213349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.640 [2024-05-15 17:12:49.213572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.640 [2024-05-15 17:12:49.213581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.640 [2024-05-15 17:12:49.213587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.640 [2024-05-15 17:12:49.217128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.640 [2024-05-15 17:12:49.226306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.640 [2024-05-15 17:12:49.226916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.227246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.227256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.227263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.227481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.227703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.227711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.227718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 [2024-05-15 17:12:49.231254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.641 [2024-05-15 17:12:49.240221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.240926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.241301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.241314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.241323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.241569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.241791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.241800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.241812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 [2024-05-15 17:12:49.245355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.641 [2024-05-15 17:12:49.254117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.254656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.254876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.254888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.254898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.255136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.255358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.255367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.255374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.641 [2024-05-15 17:12:49.258920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.641 [2024-05-15 17:12:49.267938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.268554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.268886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.268896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.268904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.269124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.269342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.269350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.269357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 [2024-05-15 17:12:49.272904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.641 [2024-05-15 17:12:49.281871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.282577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.282929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.282941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.282951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.283190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.283417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.283425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.283433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 [2024-05-15 17:12:49.286981] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.641 [2024-05-15 17:12:49.295748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.296316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.296678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.296689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.296696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.296916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.297135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.297142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.297149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.641 [2024-05-15 17:12:49.300690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.641 [2024-05-15 17:12:49.303501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.641 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.641 [2024-05-15 17:12:49.309704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.310278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.310490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.310499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.310507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.310729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.310949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.310956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.310963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 [2024-05-15 17:12:49.314502] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.641 [2024-05-15 17:12:49.323684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.324286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.324612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.324622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.641 [2024-05-15 17:12:49.324629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.641 [2024-05-15 17:12:49.324848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.641 [2024-05-15 17:12:49.325066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.641 [2024-05-15 17:12:49.325073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.641 [2024-05-15 17:12:49.325080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.641 [2024-05-15 17:12:49.328620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.641 [2024-05-15 17:12:49.337598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.641 [2024-05-15 17:12:49.338162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.338368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.641 [2024-05-15 17:12:49.338377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.642 [2024-05-15 17:12:49.338384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.642 [2024-05-15 17:12:49.338608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.642 [2024-05-15 17:12:49.338828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.642 [2024-05-15 17:12:49.338835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.642 [2024-05-15 17:12:49.338842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.642 [2024-05-15 17:12:49.342375] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.642 Malloc0 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.642 [2024-05-15 17:12:49.351552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.642 [2024-05-15 17:12:49.352131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.642 [2024-05-15 17:12:49.352477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.642 [2024-05-15 17:12:49.352486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.642 [2024-05-15 17:12:49.352494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.642 [2024-05-15 17:12:49.352717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.642 [2024-05-15 17:12:49.352936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.642 [2024-05-15 17:12:49.352944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.642 [2024-05-15 17:12:49.352955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.642 [2024-05-15 17:12:49.356487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.642 [2024-05-15 17:12:49.365453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.642 [2024-05-15 17:12:49.366007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.642 [2024-05-15 17:12:49.366385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.642 [2024-05-15 17:12:49.366395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e750 with addr=10.0.0.2, port=4420 00:28:10.642 [2024-05-15 17:12:49.366402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e750 is same with the state(5) to be set 00:28:10.642 [2024-05-15 17:12:49.366625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e750 (9): Bad file descriptor 00:28:10.642 [2024-05-15 17:12:49.366844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:10.642 [2024-05-15 17:12:49.366852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:10.642 [2024-05-15 17:12:49.366858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.642 [2024-05-15 17:12:49.370406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.642 [2024-05-15 17:12:49.376747] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:10.642 [2024-05-15 17:12:49.376934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.642 [2024-05-15 17:12:49.379377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.642 17:12:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1638457 00:28:10.642 [2024-05-15 17:12:49.417041] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
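The interleaved rpc_cmd lines above (host/bdevperf.sh@17-21) show the target being configured over JSON-RPC while the reconnect noise continues: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. A consolidated sketch of the same sequence, calling scripts/rpc.py directly instead of going through the rpc_cmd helper (flags copied from the trace; the rpc.py path assumes the workspace layout used in this run):

    # Sketch of the target setup performed by host/bdevperf.sh, issued via scripts/rpc.py
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, flags as passed by the test
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420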
00:28:20.643 00:28:20.643 Latency(us) 00:28:20.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.643 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:20.643 Verification LBA range: start 0x0 length 0x4000 00:28:20.643 Nvme1n1 : 15.01 8365.72 32.68 9498.93 0.00 7138.81 791.89 16384.00 00:28:20.643 =================================================================================================================== 00:28:20.643 Total : 8365.72 32.68 9498.93 0.00 7138.81 791.89 16384.00 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.644 rmmod nvme_tcp 00:28:20.644 rmmod nvme_fabrics 00:28:20.644 rmmod nvme_keyring 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1639757 ']' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1639757 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1639757 ']' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1639757 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1639757 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1639757' 00:28:20.644 killing process with pid 1639757 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1639757 00:28:20.644 [2024-05-15 17:12:58.207888] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 1639757 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.644 17:12:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.586 17:13:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:21.586 00:28:21.586 real 0m27.764s 00:28:21.586 user 1m3.340s 00:28:21.586 sys 0m7.001s 00:28:21.586 17:13:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.586 17:13:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:21.586 ************************************ 00:28:21.586 END TEST nvmf_bdevperf 00:28:21.586 ************************************ 00:28:21.848 17:13:00 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:21.848 17:13:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:21.848 17:13:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:21.848 17:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:21.848 ************************************ 00:28:21.848 START TEST nvmf_target_disconnect 00:28:21.848 ************************************ 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:21.848 * Looking for test storage... 
00:28:21.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:21.848 17:13:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:29.997 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:29.997 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.997 17:13:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:29.997 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:29.997 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.997 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:29.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:28:29.998 00:28:29.998 --- 10.0.0.2 ping statistics --- 00:28:29.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.998 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:28:29.998 00:28:29.998 --- 10.0.0.1 ping statistics --- 00:28:29.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.998 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.998 ************************************ 00:28:29.998 START TEST nvmf_target_disconnect_tc1 00:28:29.998 ************************************ 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:28:29.998 
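
The nvmf_tcp_init trace above brings up the test network: the two ice ports appear as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms the path before any NVMe/TCP traffic is attempted. A minimal stand-alone sketch of the same bring-up (interface and namespace names are taken from this run and will differ on other hosts):

  # assumes the two NIC ports already exist as cvl_0_0 / cvl_0_1 (names from this log)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in on the initiator port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
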
17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.998 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.998 [2024-05-15 17:13:07.859071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.998 [2024-05-15 17:13:07.859500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.998 [2024-05-15 17:13:07.859514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1657b70 with addr=10.0.0.2, port=4420 00:28:29.998 [2024-05-15 17:13:07.859540] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:29.998 [2024-05-15 17:13:07.859558] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:29.998 [2024-05-15 17:13:07.859566] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:29.998 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:29.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:29.998 Initializing NVMe Controllers 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:29.998 00:28:29.998 real 0m0.115s 00:28:29.998 user 0m0.057s 00:28:29.998 sys 0m0.057s 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.998 ************************************ 00:28:29.998 END TEST nvmf_target_disconnect_tc1 00:28:29.998 ************************************ 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.998 ************************************ 00:28:29.998 START TEST nvmf_target_disconnect_tc2 00:28:29.998 ************************************ 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1645671 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1645671 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1645671 ']' 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
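
nvmf_target_disconnect_tc1 above is a negative test: build/examples/reconnect is pointed at trtype:tcp traddr:10.0.0.2 trsvcid:4420 before any target has been started, so spdk_nvme_probe() fails (connect() returns errno 111) and the surrounding NOT wrapper from autotest_common.sh turns the expected non-zero exit status into a pass. Reduced to a self-contained sketch, with check_fails as a hypothetical stand-in for that wrapper:

  # hypothetical reduction of the NOT / es pattern traced above
  check_fails() {
      local es=0
      "$@" || es=$?                  # run the command, remember how it exited
      (( es > 128 )) && return 1     # died on a signal: not the failure this test expects
      (( es != 0 ))                  # pass only if the command itself reported an error
  }

  check_fails ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
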
00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:29.998 17:13:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.998 [2024-05-15 17:13:07.986316] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:29.998 [2024-05-15 17:13:07.986380] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.998 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.998 [2024-05-15 17:13:08.073455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.998 [2024-05-15 17:13:08.164224] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.998 [2024-05-15 17:13:08.164281] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.998 [2024-05-15 17:13:08.164289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.998 [2024-05-15 17:13:08.164296] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.998 [2024-05-15 17:13:08.164302] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.998 [2024-05-15 17:13:08.164466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:29.998 [2024-05-15 17:13:08.164621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:29.998 [2024-05-15 17:13:08.164793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:29.998 [2024-05-15 17:13:08.164794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.999 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 Malloc0 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 [2024-05-15 17:13:08.855522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.259 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.260 [2024-05-15 17:13:08.895591] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:30.260 [2024-05-15 17:13:08.895956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1645785 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:30.260 17:13:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.260 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.177 17:13:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1645671 00:28:32.177 17:13:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Write completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 Read completed with error (sct=0, sc=8) 00:28:32.177 starting I/O failed 00:28:32.177 [2024-05-15 17:13:10.929706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.177 [2024-05-15 17:13:10.930115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
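
For tc2 the target is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, core mask 0xF0, reactors on cores 4-7), configured over RPC, and then killed with SIGKILL two seconds after the reconnect workload starts, which is why every queued command above completes with an error (sct=0, sc=8) and the queue pair reports CQ transport error -6. The same sequence written out with scripts/rpc.py for readability (the test itself goes through its rpc_cmd wrapper, paths are abbreviated, and nvmf_tgt_pid stands in for the pid captured at launch, 1645671 in this run):

  # start the target inside the namespace (the test then waits for its RPC socket)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmf_tgt_pid=$!

  # configure it: malloc bdev, TCP transport, subsystem, namespace, data + discovery listeners
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # start the reconnect workload from the root namespace, then pull the target out from under it
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  kill -9 "$nvmf_tgt_pid"       # in-flight I/O fails, and the example starts retrying the connection
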
00:28:32.177 [2024-05-15 17:13:10.930464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.930475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.930913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.931261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.931275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.931779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.932121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.932134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.932491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.932700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.932713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.933015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.933300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.933311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.933611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.933985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.933995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.934344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.934685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.934695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 
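
Every retry in the long run of messages that follows is the same failure: connect() to 10.0.0.2 port 4420 returns errno 111, which on Linux is ECONNREFUSED. The packet still reaches the namespace, but with nvmf_tgt killed nothing is listening on the port, so the peer answers each SYN with a reset, nvme_tcp_qpair_connect_sock() cannot rebuild the socket, and the qpair stays unrecovered. A quick check of the errno mapping, if you want to confirm it on the test host:

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
  # prints: 111 Connection refused   (on Linux)
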
00:28:32.177 [2024-05-15 17:13:10.934982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.935141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.935152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.935452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.935849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.935859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.936221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.936410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.936421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.936762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.937097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.937107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.937185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.937505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.937515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.937855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.938170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.938180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.938490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.938807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.938817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 
00:28:32.177 [2024-05-15 17:13:10.939192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.939441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.939451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.939674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.940034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.940043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.940346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.940539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.940554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.940970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.941302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.941312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.941647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.941849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.941857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.942180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.942490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.942500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.942817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.943217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.943227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 
00:28:32.177 [2024-05-15 17:13:10.943537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.943864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.943874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.177 [2024-05-15 17:13:10.944142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.944479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.177 [2024-05-15 17:13:10.944488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.177 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.944830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.945164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.945174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.945500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.945814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.945824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.946159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.946475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.946484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.946789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.947134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.947143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.947367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.947592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.947602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.947922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.948193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.948202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.948461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.948754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.948763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.949051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.949333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.949343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.949685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.949956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.949965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.950362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.950693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.950702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.951045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.951350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.951358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.951686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.951873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.951881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.952248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.952431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.952440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.952677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.953001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.953010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.953363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.953522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.953531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.953739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.954023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.954032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.954239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.954543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.954557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.954878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.955284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.955293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.955473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.955645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.955656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.955970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.956287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.956297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.956528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.956686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.956696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.957053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.957409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.957420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.957746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.958052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.958063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.958378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.958704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.958716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.959004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.959330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.959342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.959666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.959996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.960007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.960215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.960515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.960526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.960992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.961305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.961316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.961606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.961946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.961958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.962299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.962551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.962562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.962876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.963193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.963204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.963522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.963812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.963824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.964125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.964323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.964336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.964675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.964924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.964935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.965241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.965585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.965597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.965816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.966143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.966155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.966500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.966817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.966829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.967140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.967437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.967447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.967826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.968203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.968214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.968519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.968847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.968859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.969154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.969484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.969496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.969831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.970189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.970201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.970557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.970943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.970955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.971261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.971620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.971636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.972006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.972241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.972256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.972569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.972989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.973003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.973314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.973492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.973509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.973906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.974259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.974275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.974526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.974848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.974863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.975190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.975399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.975417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.975763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.976123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.976138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.976486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.976832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.976847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.977207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.977445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.977460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.977665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.978091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.978107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 
00:28:32.178 [2024-05-15 17:13:10.978443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.978751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.978770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.178 [2024-05-15 17:13:10.979048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.979379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.178 [2024-05-15 17:13:10.979394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.178 qpair failed and we were unable to recover it. 00:28:32.179 [2024-05-15 17:13:10.979587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.979901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.979917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.179 qpair failed and we were unable to recover it. 00:28:32.179 [2024-05-15 17:13:10.980280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.980639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.980655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.179 qpair failed and we were unable to recover it. 00:28:32.179 [2024-05-15 17:13:10.980992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.981327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.981344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.179 qpair failed and we were unable to recover it. 00:28:32.179 [2024-05-15 17:13:10.981709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.982068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.982084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.179 qpair failed and we were unable to recover it. 00:28:32.179 [2024-05-15 17:13:10.982497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.982822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.179 [2024-05-15 17:13:10.982838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.179 qpair failed and we were unable to recover it. 
00:28:32.179 [2024-05-15 17:13:10.983046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.983381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.983396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.983600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.983909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.983924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.984268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.984505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.984524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.984944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.985273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.985296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.985674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.985998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.986016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.986353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.986688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.986708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.986957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.987319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.987338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.987714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.988018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.988036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.988380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.988619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.988640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.989068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.989401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.989419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.989767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.990133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.990152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.990499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.990829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.990849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.991063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.991462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.991481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.991825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.992178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.992202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.992536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.992791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.992810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.993127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.993476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.993494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.993820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.994153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.994172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.994505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.994890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.994910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.995263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.995619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.995647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.996050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.996382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.996408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.996758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.997140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.997165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.997523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.997894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.997921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.998334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.998690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.998716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.999053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.999242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.999271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:10.999637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.999962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:10.999988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.000359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.000707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.000735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.001091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.001439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.001465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.001813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.002227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.002253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.002606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.002876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.002901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.003223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.003454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.003480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.003781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.004133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.004159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.004415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.004780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.004808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.005056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.005424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.005450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.005721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.006069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.006096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.006444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.006771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.006798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.007175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.007502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.007528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.007943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.008278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.008304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.008643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.009013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.179 [2024-05-15 17:13:11.009039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.179 qpair failed and we were unable to recover it.
00:28:32.179 [2024-05-15 17:13:11.009406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.009735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.009763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.010109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.010440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.010466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.010877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.011225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.011250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.011614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.011978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.012004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.012370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.012733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.012761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.013119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.013470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.013496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.013794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.014149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.014175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.014435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.014805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.014832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.015198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.015560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.015587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.015821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.016188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.016214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.016573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.016928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.016953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.017216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.017541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.017590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.445 qpair failed and we were unable to recover it.
00:28:32.445 [2024-05-15 17:13:11.017957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.018281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.445 [2024-05-15 17:13:11.018308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.018676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.019042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.019068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.019407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.019746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.019773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.020120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.020453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.020479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.020899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.021284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.021309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.021669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.022039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.022065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.022275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.022652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.022679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.023073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.023401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.023427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.023786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.024152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.024177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.024535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.024882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.024910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.025305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.025672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.025701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.026079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.026309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.026338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.026693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.027063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.027089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.027500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.027888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.027915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.028286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.028568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.028596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.028972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.029315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.029340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.029719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.030072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.030098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.030482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.030817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.030844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.031209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.031537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.031572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.031941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.032275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.032300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.032646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.033025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.033050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.033415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.033794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.033822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.034219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.034608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.034635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.035009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.035366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.035391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.035767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.036127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.036153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.446 qpair failed and we were unable to recover it.
00:28:32.446 [2024-05-15 17:13:11.036495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.446 [2024-05-15 17:13:11.036820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.036847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.037191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.037544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.037582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.037827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.038205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.038232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.038605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.038955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.038982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.039346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.039698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.039725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.040093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.040447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.040472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.040881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.041226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.041252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.041628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.041974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.041999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.042367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.042731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.042758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.043188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.043516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.043542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.043951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.044302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.044328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.044693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.045030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.045056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.045423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.045767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.045795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.046184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.046543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.046582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.046842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.047080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.047107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.047518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.047766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.047793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.048028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.048377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.048403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.048762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.049009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.049039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.049382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.049804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.049831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.050126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.050417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.050443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.050834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.051207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.051233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.051479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.051823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.051851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.052111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.052456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.052482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.052799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.053163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.053189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.053540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.053954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.053980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.054324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.054579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.054606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.447 qpair failed and we were unable to recover it.
00:28:32.447 [2024-05-15 17:13:11.054947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.055298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.447 [2024-05-15 17:13:11.055325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.055607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.055860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.055890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.056242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.056621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.056648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.057037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.057388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.057414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.057777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.058130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.058156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.058504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.058725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.058753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.059087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.059455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.059481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.059852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.060201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.060228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.060575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.060904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.060932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.061301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.061646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.061674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.061946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.062322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.062348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.062727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.063090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.063116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.063508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.063823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.063852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.064263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.064588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.064620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.065063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.065307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.065336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.065692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.066044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.066070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.066410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.066750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.066777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.067172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.067522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.067558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.067957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.068317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.068343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.068710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.069060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.069087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.069441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.069788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.069814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.070214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.070596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.070624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.071002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.071331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.071358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.071745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.072072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.072100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.072464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.072764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.448 [2024-05-15 17:13:11.072792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:32.448 qpair failed and we were unable to recover it.
00:28:32.448 [2024-05-15 17:13:11.073152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.448 [2024-05-15 17:13:11.073496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.448 [2024-05-15 17:13:11.073522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.448 qpair failed and we were unable to recover it. 00:28:32.448 [2024-05-15 17:13:11.073937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.448 [2024-05-15 17:13:11.074281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.448 [2024-05-15 17:13:11.074307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.448 qpair failed and we were unable to recover it. 00:28:32.448 [2024-05-15 17:13:11.074592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.075067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.075093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.075456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.075790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.075818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.076185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.076401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.076426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.076740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.077101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.077128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.077491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.077900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.077927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 
00:28:32.449 [2024-05-15 17:13:11.078282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.078642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.078669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.079085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.079459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.079485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.079834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.080179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.080205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.080572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.080984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.081010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.081355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.081612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.081638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.082029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.082366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.082392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.082765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.083106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.083132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 
00:28:32.449 [2024-05-15 17:13:11.083494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.083748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.083779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.084144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.084475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.084501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.084820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.085195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.085221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.085566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.085812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.085838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.086216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.086554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.086587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.086942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.087286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.087312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.087666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.088016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.088042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 
00:28:32.449 [2024-05-15 17:13:11.088316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.088568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.088596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.088965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.089306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.089332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.089573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.089946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.089973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.449 [2024-05-15 17:13:11.090231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.090577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.449 [2024-05-15 17:13:11.090605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.449 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.090962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.091362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.091388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.091726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.091951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.091980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.092347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.092721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.092749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 
00:28:32.450 [2024-05-15 17:13:11.093130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.093520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.093561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.093935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.094315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.094341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.094697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.095047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.095073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.095439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.095783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.095811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.096173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.096539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.096574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.096932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.097270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.097296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.097648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.098006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.098032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 
00:28:32.450 [2024-05-15 17:13:11.098396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.098808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.098836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.099094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.099426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.099453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.099695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.100061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.100087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.102397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.102712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.102755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.450 qpair failed and we were unable to recover it. 00:28:32.450 [2024-05-15 17:13:11.103110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.103459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.450 [2024-05-15 17:13:11.103485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.103804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.104015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.104041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.104395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.104741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.104768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 
00:28:32.451 [2024-05-15 17:13:11.105133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.105465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.105491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.105864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.106225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.106250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.106612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.106876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.106900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.107155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.107284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.107309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.107655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.108033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.108059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.108412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.108758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.108786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.109143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.109382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.109413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 
00:28:32.451 [2024-05-15 17:13:11.109768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.110102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.110128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.451 qpair failed and we were unable to recover it. 00:28:32.451 [2024-05-15 17:13:11.110508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.451 [2024-05-15 17:13:11.110857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.110884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.111323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.111701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.111728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.111996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.112365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.112392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.112778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.113134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.113160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.113529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.113996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.114024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.114382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.114869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.114896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 
00:28:32.452 [2024-05-15 17:13:11.115333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.115580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.115607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.115962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.116407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.116433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.116809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.117170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.117196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.117468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.117861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.117888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.118237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.118490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.118516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.118861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.119235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.119262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.119642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.119940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.119966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 
00:28:32.452 [2024-05-15 17:13:11.120326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.120675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.120702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.121083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.121462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.121487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.121856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.122215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.122240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.122598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.122860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.122888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.123190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.123615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.123642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.124033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.124390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.124415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.124630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.124950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.124976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 
00:28:32.452 [2024-05-15 17:13:11.125331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.125685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.125711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.126129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.126454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.126480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.126744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.127087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.127113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.127560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.127971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.127997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.128249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.128498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.128523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.128927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.129300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.129326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 00:28:32.452 [2024-05-15 17:13:11.129596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.129966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.452 [2024-05-15 17:13:11.129992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.452 qpair failed and we were unable to recover it. 
00:28:32.453 [2024-05-15 17:13:11.130359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.130719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.130747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.131133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.131381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.131410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.131765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.132121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.132147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.132518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.132896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.132925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.133290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.133658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.133686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.134087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.134435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.134461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.134705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.135058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.135085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 
00:28:32.453 [2024-05-15 17:13:11.135450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.135801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.135828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.136200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.136469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.136495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.136762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.137142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.137168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.137586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.137817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.137844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.138176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.138530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.138568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.138917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.139275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.139302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.139682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.140049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.140076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 
00:28:32.453 [2024-05-15 17:13:11.140455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.140793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.140820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.141169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.141540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.141576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.141935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.142198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.142224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.142583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.142984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.143011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.143391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.143632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.143660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.144015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.144372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.144399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.144772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.145133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.145160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 
00:28:32.453 [2024-05-15 17:13:11.145520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.145939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.145966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.146328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.146666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.146694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.146929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.147268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.147294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.147538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.147911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.147937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.148297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.148636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.148665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.453 qpair failed and we were unable to recover it. 00:28:32.453 [2024-05-15 17:13:11.149013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.453 [2024-05-15 17:13:11.149448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.149475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.149848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.150225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.150251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 
00:28:32.454 [2024-05-15 17:13:11.150666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.150922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.150954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.151310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.151724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.151750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.152108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.152479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.152505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.152873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.153213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.153239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.153580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.153914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.153942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.154304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.154617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.154645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.155009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.155334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.155360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 
00:28:32.454 [2024-05-15 17:13:11.155632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.156034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.156060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.156378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.156714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.156740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.157109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.157481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.157506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.157807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.158180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.158206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.158583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.158921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.158946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.159323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.159688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.159716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.159967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.160306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.160331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 
00:28:32.454 [2024-05-15 17:13:11.160702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.161084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.161111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.161361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.161725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.161751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.162124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.162482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.162508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.162802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.163155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.163181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.163562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.163698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.163723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.164106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.164453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.164480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.164826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.165131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.165158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 
00:28:32.454 [2024-05-15 17:13:11.165473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.165721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.165747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.165989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.166348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.166375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.166749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.166996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.167026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.454 qpair failed and we were unable to recover it. 00:28:32.454 [2024-05-15 17:13:11.167334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.454 [2024-05-15 17:13:11.167692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.167720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.167966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.168326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.168352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.168723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.169191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.169217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.169581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.169904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.169930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 
00:28:32.455 [2024-05-15 17:13:11.170274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.170608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.170635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.170963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.171398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.171424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.171776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.172135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.172161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.172402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.172675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.172711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.173026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.173171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.173196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.173585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.173852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.173880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.174234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.174468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.174494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 
00:28:32.455 [2024-05-15 17:13:11.174870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.175116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.175152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.175373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.175740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.175768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.176153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.176396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.176426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.176787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.177023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.177049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.177442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.177664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.177691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.178037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.178388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.178422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.178855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.179214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.179241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 
00:28:32.455 [2024-05-15 17:13:11.179471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.179788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.179815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.180251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.180632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.180660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.181034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.181398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.181424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.181776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.182150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.182176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.182407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.182790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.182817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.183200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.183532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.183567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 00:28:32.455 [2024-05-15 17:13:11.183942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.184285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.184311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.455 qpair failed and we were unable to recover it. 
00:28:32.455 [2024-05-15 17:13:11.184755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.455 [2024-05-15 17:13:11.185099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.185124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.185469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.185835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.185862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.186227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.186573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.186601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.186842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.187085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.187114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.187453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.187782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.187809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.188178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.188419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.188445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.188841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.189186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.189212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 
00:28:32.456 [2024-05-15 17:13:11.189591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.189890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.189916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.190297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.190674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.190701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.191061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.191416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.191442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.191709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.192136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.192162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.192530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.192890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.192918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.193285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.193675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.193703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.193965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.194335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.194360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 
00:28:32.456 [2024-05-15 17:13:11.194675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.195034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.195060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.195409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.195876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.195903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.196275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.196625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.196652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.197021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.197305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.197331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.197691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.198048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.198075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.198447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.198778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.198805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 00:28:32.456 [2024-05-15 17:13:11.199171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.199514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.199540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.456 qpair failed and we were unable to recover it. 
00:28:32.456 [2024-05-15 17:13:11.199910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.456 [2024-05-15 17:13:11.200245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.200270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.200517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.200868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.200895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.201247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.201693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.201720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.202087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.202442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.202467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.202834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.203171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.203210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.203587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.203953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.203979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.204327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.204672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.204700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 
00:28:32.457 [2024-05-15 17:13:11.205056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.205413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.205438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.205701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.206093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.206119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.206470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.206827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.206854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.207222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.207579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.207606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.207963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.208304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.208330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.208702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.208943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.208969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.209345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.209671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.209699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 
00:28:32.457 [2024-05-15 17:13:11.210054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.210370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.210401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.210767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.211110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.211135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.211505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.211858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.211886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.212263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.212600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.212627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.213010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.213364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.213390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.213739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.214074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.214100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.214456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.214705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.214732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 
00:28:32.457 [2024-05-15 17:13:11.215121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.215478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.215504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.216046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.216379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.216406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.216782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.217127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.217153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.217487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.217825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.217857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.457 [2024-05-15 17:13:11.218207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.218567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.457 [2024-05-15 17:13:11.218596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.457 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.218966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.219300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.219326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.219617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.219972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.219998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 
00:28:32.458 [2024-05-15 17:13:11.220372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.220720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.220747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.221141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.221468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.221494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.221759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.222138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.222164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.222532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.222798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.222828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.223199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.223533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.223570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.223956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.224192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.224220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.224572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.224928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.224960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 
00:28:32.458 [2024-05-15 17:13:11.225323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.225673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.225700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.226079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.226363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.226398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.226754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.227066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.227093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.227476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.227833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.227861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.228225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.228573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.228600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.228947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.229297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.229323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.229705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.230063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.230089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 
00:28:32.458 [2024-05-15 17:13:11.230405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.230743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.230770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.231137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.231503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.231528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.231899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.232229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.232255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.232607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.232958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.232984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.233336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.233691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.233718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.234080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.234424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.234450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.234798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.235147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.235172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 
00:28:32.458 [2024-05-15 17:13:11.235523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.235883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.235910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.236264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.236656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.236684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.237053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.237393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.237419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.458 qpair failed and we were unable to recover it. 00:28:32.458 [2024-05-15 17:13:11.237768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.458 [2024-05-15 17:13:11.238133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.238159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.238555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.238881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.238907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.239275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.239520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.239555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.240010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.240246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.240280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 
00:28:32.459 [2024-05-15 17:13:11.240630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.240986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.241012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.241353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.241711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.241738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.242085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.242450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.242475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.242830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.243188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.243214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.243604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.243989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.244015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.244384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.244532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.244571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.244714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.244954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.244980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 
00:28:32.459 [2024-05-15 17:13:11.245357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.245641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.245669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.246027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.246363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.246388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.246539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.246812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.246841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.247047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.247473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.247499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.247870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.248237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.248263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.248608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.248965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.248991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.249320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.249704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.249731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 
00:28:32.459 [2024-05-15 17:13:11.250110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.250443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.250470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.250836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.251183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.251209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.251579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.251937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.251962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.252330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.252597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.252624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.253013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.253358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.253384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.253742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.254115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.254141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.254510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.254898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.254925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 
00:28:32.459 [2024-05-15 17:13:11.255289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.255635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.255663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.256024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.256370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.256395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.256755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.257111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.257137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.257483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.257849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.459 [2024-05-15 17:13:11.257878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.459 qpair failed and we were unable to recover it. 00:28:32.459 [2024-05-15 17:13:11.258252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.258587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.258614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.258985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.259323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.259349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.259730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.260089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.260115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 
00:28:32.460 [2024-05-15 17:13:11.260479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.260840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.260866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.261209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.261535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.261597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.261948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.262288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.262314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.262680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.263033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.263059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.263431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.263675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.263705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.264082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.264500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.264526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.264783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.265133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.265159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 
00:28:32.460 [2024-05-15 17:13:11.265527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.265890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.265917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.266285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.266685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.266714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.266965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.267337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.267363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.267742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.268082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.268109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.268473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.268830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.268857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.269218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.269569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.269596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.269988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.270347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.270374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 
00:28:32.460 [2024-05-15 17:13:11.270634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.271003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.271029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.271394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.271782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.271811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.272164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.272532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.272569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.272950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.273288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.273313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.460 [2024-05-15 17:13:11.273661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.274026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.460 [2024-05-15 17:13:11.274052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.460 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.274406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.274743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.274770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.275131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.275500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.275526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 
00:28:32.732 [2024-05-15 17:13:11.275955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.276305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.276331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.276750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.277106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.277132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.277498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.277838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.277865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.278225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.278581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.278608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.278955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.279314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.279340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.279733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.280097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.280123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.280488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.280870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.280897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 
00:28:32.732 [2024-05-15 17:13:11.281265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.281629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.281656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.282036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.282384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.282410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.282689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.283038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.283064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.283432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.283760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.283786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.284149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.284495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.284520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.284877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.285236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.285262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.285507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.285730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.285759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 
00:28:32.732 [2024-05-15 17:13:11.286151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.286506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.286533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.287009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.287356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.287383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.287731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.288088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.288115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.732 qpair failed and we were unable to recover it. 00:28:32.732 [2024-05-15 17:13:11.288554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.732 [2024-05-15 17:13:11.288924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.288950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.289213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.289576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.289603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.289995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.290352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.290377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.290636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.291090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.291116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 
00:28:32.733 [2024-05-15 17:13:11.291462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.291847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.291874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.292312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.292650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.292678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.293037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.293390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.293416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.293775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.294138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.294164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.294557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.294952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.294978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.295301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.295657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.295685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.296043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.296400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.296426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 
00:28:32.733 [2024-05-15 17:13:11.296794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.297167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.297194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.297568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.300012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.300074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.300516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.300923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.300952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.301402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.301649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.301676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.302065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.302316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.302341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.302686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.303094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.303120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.303381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.303803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.303831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 
00:28:32.733 [2024-05-15 17:13:11.304205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.304566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.304594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.304939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.305277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.305302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.305680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.305917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.305946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.306308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.306645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.306672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.307041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.307400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.307426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.307788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.308177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.308203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.308593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.308984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.309009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 
00:28:32.733 [2024-05-15 17:13:11.309368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.309708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.309736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.310104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.310493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.310518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.310871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.311232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.311258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.311594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.311812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.311839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.733 qpair failed and we were unable to recover it. 00:28:32.733 [2024-05-15 17:13:11.312178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.312526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.733 [2024-05-15 17:13:11.312562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.312889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.313259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.313285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.313663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.313987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.314013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 
00:28:32.734 [2024-05-15 17:13:11.314392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.314740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.314767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.315202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.315573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.315605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.315921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.316300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.316325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.316685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.317093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.317118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.317376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.317760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.317788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.318158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.318513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.318539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.318884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.319241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.319267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 
00:28:32.734 [2024-05-15 17:13:11.319618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.319973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.319999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.320370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.320730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.320757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.321131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.321469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.321495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.321889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.322236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.322263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.322597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.322984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.323015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.323408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.323772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.323799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.324169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.324526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.324562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 
00:28:32.734 [2024-05-15 17:13:11.324852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.325213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.325239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.325600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.325980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.326006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.326385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.326721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.326750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.327123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.327492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.327518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.327885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.328243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.328269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.328624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.328876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.328903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.329300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.329561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.329589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 
00:28:32.734 [2024-05-15 17:13:11.329968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.330370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.330400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.330746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.330993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.331023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.331386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.331785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.331812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.332248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.332597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.332625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.332993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.333300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.333327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.333656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.334058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.734 [2024-05-15 17:13:11.334084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.734 qpair failed and we were unable to recover it. 00:28:32.734 [2024-05-15 17:13:11.334468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.334882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.334910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 
00:28:32.735 [2024-05-15 17:13:11.335325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.335650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.335677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.336055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.336414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.336441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.336788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.337153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.337180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.337584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.337948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.337980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.338366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.338607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.338638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.338994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.339339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.339365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.339721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.340102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.340127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 
00:28:32.735 [2024-05-15 17:13:11.340485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.340884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.340910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.341144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.341407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.341436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.341807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.342196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.342222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.342589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.342993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.343020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.343391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.343749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.343776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.344158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.344496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.344521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.344780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.345165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.345192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 
00:28:32.735 [2024-05-15 17:13:11.345540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.345886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.345913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.346285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.346629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.346657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.347032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.347375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.347401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.347792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.348214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.348239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.348610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.348988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.349013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.349390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.349746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.349774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.350135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.350469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.350495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 
00:28:32.735 [2024-05-15 17:13:11.350919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.351282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.351310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.351675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.352043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.352069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.352382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.352609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.352639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.353014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.353388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.353415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.353766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.354133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.354160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.354536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.354915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.354941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.735 qpair failed and we were unable to recover it. 00:28:32.735 [2024-05-15 17:13:11.355179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.735 [2024-05-15 17:13:11.355533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.355573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 
00:28:32.736 [2024-05-15 17:13:11.355939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.356290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.356317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.356755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.357119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.357145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.357508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.357884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.357911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.358283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.358640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.358668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.359073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.359302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.359331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.359774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.360094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.360120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.360570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.360796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.360821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 
00:28:32.736 [2024-05-15 17:13:11.361213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.361566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.361593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.361944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.362318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.362344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.362712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.363079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.363105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.363485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.363854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.363881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.364223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.364579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.364607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.364897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.365244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.365271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.365565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.365961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.365988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 
00:28:32.736 [2024-05-15 17:13:11.366357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.366726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.366754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.367143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.367369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.367398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.367712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.368089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.368115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.368469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.368798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.368826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.369263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.369503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.369528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.369931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.370276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.370302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.370679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.371048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.371074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 
00:28:32.736 [2024-05-15 17:13:11.371457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.371836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.371864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.372114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.372364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.372389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.736 qpair failed and we were unable to recover it. 00:28:32.736 [2024-05-15 17:13:11.372807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.736 [2024-05-15 17:13:11.373188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.373213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.373584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.373936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.373961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.374332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.374725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.374753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.375129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.375491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.375516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.375884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.376249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.376276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 
00:28:32.737 [2024-05-15 17:13:11.376656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.377013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.377038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.377382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.377746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.377773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.378149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.378403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.378429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.378804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.379165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.379190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.379591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.379971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.379997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.380344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.380701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.380728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.381090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.381445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.381471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 
00:28:32.737 [2024-05-15 17:13:11.381882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.382251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.382277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.382666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.383022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.383048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.383414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.383755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.383782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.384153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.384558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.384586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.384993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.385335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.385362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.385717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.386122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.386148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.386497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.386869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.386898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 
00:28:32.737 [2024-05-15 17:13:11.387263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.387629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.387656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.388027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.388478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.388505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.388881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.389262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.389290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.389574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.389955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.389982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.390359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.390728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.390756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.391114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.391498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.391524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.391915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.392203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.392229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 
00:28:32.737 [2024-05-15 17:13:11.392493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.392859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.392887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.393226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.393568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.393595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.393993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.394336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.394362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.737 [2024-05-15 17:13:11.394599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.395003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.737 [2024-05-15 17:13:11.395030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.737 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.395385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.395842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.395938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.396398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.396756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.396786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.397134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.397480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.397507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 
00:28:32.738 [2024-05-15 17:13:11.397985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.398351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.398378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.398748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.399109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.399136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.399508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.399869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.399896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.400295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.400658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.400685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.401054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.401309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.401336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.401696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.402060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.402086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.402451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.402801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.402829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 
00:28:32.738 [2024-05-15 17:13:11.403093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.403459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.403484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.403853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.404213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.404239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.404642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.405018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.405044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.405483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.405882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.405910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.406319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.406707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.406736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.407111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.407474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.407500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.407866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.408138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.408169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 
00:28:32.738 [2024-05-15 17:13:11.408559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.408946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.408973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.409359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.409718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.409747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.410143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.410484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.410510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.410885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.411126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.411154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.411521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.411881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.411909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.412258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.412639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.412668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.412989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.413359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.413386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 
00:28:32.738 [2024-05-15 17:13:11.413791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.414020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.414048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.414347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.414596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.414623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.414994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.415364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.415390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.415595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.415944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.415970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.738 [2024-05-15 17:13:11.416228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.416601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.738 [2024-05-15 17:13:11.416629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.738 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.417014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.417369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.417395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.417772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.418141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.418168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 
00:28:32.739 [2024-05-15 17:13:11.418559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.418934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.418962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.419330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.419699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.419726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.420104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.420498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.420524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.420912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.421277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.421302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.421661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.422021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.422047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.422422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.422786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.422813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.423056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.423451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.423477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 
00:28:32.739 [2024-05-15 17:13:11.423843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.424211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.424238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.424638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.425029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.425056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.425424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.425782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.425809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.426179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.426430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.426456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.426803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.427148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.427175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.427525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.427788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.427818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.428173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.428526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.428564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 
00:28:32.739 [2024-05-15 17:13:11.428843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.429228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.429255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.429592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.429973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.429999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.430241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.430644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.430672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.430941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.431371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.431398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.431697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.432140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.432166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.432609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.433002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.433027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.433444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.433849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.433877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 
00:28:32.739 [2024-05-15 17:13:11.434323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.434668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.434697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.434948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.435209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.435243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.435628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.436031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.436059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.436427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.436804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.436830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.437206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.437385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.437410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.437793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.438039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.438064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.739 qpair failed and we were unable to recover it. 00:28:32.739 [2024-05-15 17:13:11.438416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.739 [2024-05-15 17:13:11.438738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.438765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 
00:28:32.740 [2024-05-15 17:13:11.439152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.439525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.439560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.439830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.440114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.440140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.440521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.440865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.440892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.441083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.441482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.441508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.441903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.442028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.442059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.442434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.442790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.442817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.443240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.443620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.443647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 
00:28:32.740 [2024-05-15 17:13:11.444089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.444434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.444460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.444859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.445237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.445265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.445622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.445992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.446018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.446387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.446731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.446758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.447151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.447490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.447517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.447875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.448226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.448252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.448632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.448995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.449022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 
00:28:32.740 [2024-05-15 17:13:11.449448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.449816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.449849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.450201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.450532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.450582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.450855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.451109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.451134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.451426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.451771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.451805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.452155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.452561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.452589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.452981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.453392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.453417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.453775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.454009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.454036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 
00:28:32.740 [2024-05-15 17:13:11.454400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.454777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.454804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.455204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.455568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.455596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.456019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.456394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.456423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.456847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.457211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.457243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.457599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.457983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.458009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.458311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.458562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.458589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.458948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.459327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.459352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 
00:28:32.740 [2024-05-15 17:13:11.459709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.460110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.740 [2024-05-15 17:13:11.460137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.740 qpair failed and we were unable to recover it. 00:28:32.740 [2024-05-15 17:13:11.460503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.460871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.460900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.461271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.461646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.461674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.462056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.462402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.462429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.462791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.463144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.463171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.463555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.464001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.464027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.464415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.464760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.464787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 
00:28:32.741 [2024-05-15 17:13:11.465179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.465416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.465443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.465692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.466081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.466108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.466493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.466846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.466873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.467233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.467612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.467640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.468037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.468444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.468470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.468852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.469264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.469290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.469656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.470096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.470122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 
00:28:32.741 [2024-05-15 17:13:11.470476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.470860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.470889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.471272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.471677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.471704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.472094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.472442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.472469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.472881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.473277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.473304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.473667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.474014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.474040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.474483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.474820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.474855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.475017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.475376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.475401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 
00:28:32.741 [2024-05-15 17:13:11.475788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.476154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.476180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.476565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.476911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.476937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.477301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.477672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.477700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.477962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.478357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.478383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.741 qpair failed and we were unable to recover it. 00:28:32.741 [2024-05-15 17:13:11.478742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.741 [2024-05-15 17:13:11.479125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.479151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.479542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.479848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.479874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.480239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.480591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.480617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 
00:28:32.742 [2024-05-15 17:13:11.480894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.481132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.481158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.481533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.481838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.481864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.482247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.482651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.482680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.483067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.483491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.483518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.483799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.484126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.484153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.484609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.484990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.485016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.485285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.485667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.485695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 
00:28:32.742 [2024-05-15 17:13:11.486076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.486473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.486499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.486887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.487256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.487282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.487670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.488104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.488131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.488378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.488603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.488632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.489028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.489417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.489442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.489825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.490195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.490222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.490483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.490867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.490895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 
00:28:32.742 [2024-05-15 17:13:11.491254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.491614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.491642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.491993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.492240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.492266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.492503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.492888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.492914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.493286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.493631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.493658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.494047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.494412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.494439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.494863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.495250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.495275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.495679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.496075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.496101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 
00:28:32.742 [2024-05-15 17:13:11.496367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.496738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.496765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.497158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.497565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.497592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.497973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.498336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.498362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.498792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.499134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.499161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.499457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.499821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.499849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.500255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.500624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.742 [2024-05-15 17:13:11.500651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.742 qpair failed and we were unable to recover it. 00:28:32.742 [2024-05-15 17:13:11.501030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.501400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.501426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 
00:28:32.743 [2024-05-15 17:13:11.501698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.502156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.502182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.502579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.502948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.502974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.503289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.503700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.503728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.504122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.504498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.504525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.504977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.505345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.505374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.505743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.506103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.506128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.506371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.506739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.506767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 
00:28:32.743 [2024-05-15 17:13:11.507134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.507491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.507517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.507915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.508315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.508342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.508687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.509053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.509079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.509460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.509840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.509866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.510282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.510656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.510684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.510939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.511317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.511343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.511744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.512122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.512149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 
00:28:32.743 [2024-05-15 17:13:11.512528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.512928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.512955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.513250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.513646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.513674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.514032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.514452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.514479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.514850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.515216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.515241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.515629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.516006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.516032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.516418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.516659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.516686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.517069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.517428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.517453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 
00:28:32.743 [2024-05-15 17:13:11.517646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.518087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.518112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.518500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.518892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.518920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.519310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.519681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.519708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.520079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.520457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.520483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.520887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.521249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.521276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.521645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.521917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.521946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 00:28:32.743 [2024-05-15 17:13:11.522318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.522730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.522758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.743 qpair failed and we were unable to recover it. 
00:28:32.743 [2024-05-15 17:13:11.523084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.743 [2024-05-15 17:13:11.523489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.523517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.523965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.524327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.524354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.524713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.525079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.525106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.525502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.525940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.525969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.526223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.526588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.526617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.526987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.527331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.527356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.527715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.528133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.528159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 
00:28:32.744 [2024-05-15 17:13:11.528556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.528940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.528966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.529355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.529787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.529813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.530202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.530443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.530472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.530847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.531211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.531236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.531605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.532016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.532042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.532419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.532769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.532797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.533193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.533598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.533625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 
00:28:32.744 [2024-05-15 17:13:11.533992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.534334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.534360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.534765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.535151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.535179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.535569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.535923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.535948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.536324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.536699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.536727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.537106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.537491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.537518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.537989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.538341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.538366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.538705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.539082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.539108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 
00:28:32.744 [2024-05-15 17:13:11.539492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.539972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.539999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.540379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.540751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.540778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.541179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.541562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.541591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.542019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.542276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.542301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.542666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.543025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.543050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.543479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.543860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.543888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.544317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.544688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.544715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 
00:28:32.744 [2024-05-15 17:13:11.545142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.545419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.545445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.545831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.546203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.744 [2024-05-15 17:13:11.546232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.744 qpair failed and we were unable to recover it. 00:28:32.744 [2024-05-15 17:13:11.546630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.546978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.547005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.547393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.547750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.547777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.548164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.548534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.548582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.548943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.549326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.549351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.549741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.550051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.550076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 
00:28:32.745 [2024-05-15 17:13:11.550454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.550881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.550910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.551300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.551679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.551706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.552072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.552314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.552340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.552732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.553109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.553136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.553555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.553825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.553850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.554126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.554581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.554610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.554975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.555348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.555376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 
00:28:32.745 [2024-05-15 17:13:11.555756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.556205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.745 [2024-05-15 17:13:11.556230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:32.745 qpair failed and we were unable to recover it. 00:28:32.745 [2024-05-15 17:13:11.556631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.557047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.557081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.557497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.557898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.557925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.558325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.558816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.558845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.559115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.559480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.559509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.559895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.560273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.560300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.560670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.560908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.560936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 
00:28:33.019 [2024-05-15 17:13:11.561312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.561562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.561591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.561957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.562325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.562354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.562751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.563086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.563113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.563501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.563817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.019 [2024-05-15 17:13:11.563844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.019 qpair failed and we were unable to recover it. 00:28:33.019 [2024-05-15 17:13:11.564220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.564433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.564466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.564814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.565188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.565214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.565573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.565941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.565967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 
00:28:33.020 [2024-05-15 17:13:11.566349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.566733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.566761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.567141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.567510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.567535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.568005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.568362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.568387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.568836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.569185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.569211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.569625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.570024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.570050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.570435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.570793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.570821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.571201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.571596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.571623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 
00:28:33.020 [2024-05-15 17:13:11.572003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.572370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.572401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.572787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.573188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.573214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.573480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.573852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.573879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.574232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.574641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.574670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.574950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.575350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.575377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.575729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.576100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.576127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.576507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.576842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.576870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 
00:28:33.020 [2024-05-15 17:13:11.577230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.577671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.577699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.578047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.578423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.578449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.578806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.579168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.579195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.579572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.579946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.579979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.580341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.580712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.580740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.581023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.581412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.581438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.581842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.582186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.582214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 
00:28:33.020 [2024-05-15 17:13:11.582590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.582975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.583001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.583448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.583853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.583879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.584250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.584615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.584643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.585022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.585256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.585284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.585728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.586109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.586135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.020 qpair failed and we were unable to recover it. 00:28:33.020 [2024-05-15 17:13:11.586516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.020 [2024-05-15 17:13:11.586907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.586936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.587307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.587667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.587696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 
00:28:33.021 [2024-05-15 17:13:11.588059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.588421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.588447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.588823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.589214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.589242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.589618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.589983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.590009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.590456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.590852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.590879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.591306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.591688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.591717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.592091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.592463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.592488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.592850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.593260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.593288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 
00:28:33.021 [2024-05-15 17:13:11.593683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.594054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.594079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.594466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.594766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.594793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.595157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.595532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.595583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.595985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.596355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.596381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.596779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.597172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.597198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.597567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.597810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.597838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.598219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.598572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.598599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 
00:28:33.021 [2024-05-15 17:13:11.598848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.599219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.599245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.599624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.599998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.600024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.600406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.600764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.600792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.601174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.601557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.601585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.601945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.602319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.602346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.602787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.603149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.603174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.603584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.603962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.603990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 
00:28:33.021 [2024-05-15 17:13:11.604365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.604739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.604767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.605148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.605619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.605646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.605969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.606342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.606368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.606773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.607161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.607189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.607570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.607942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.607967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.608355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.608701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.608729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 00:28:33.021 [2024-05-15 17:13:11.609150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.609433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.021 [2024-05-15 17:13:11.609459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.021 qpair failed and we were unable to recover it. 
00:28:33.021 [2024-05-15 17:13:11.609855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.610231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.610257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.610636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.611032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.611058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.611449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.611834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.611861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.612219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.612595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.612622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.613010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.613381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.613408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.613785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.614164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.614190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.614626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.614993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.615018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 
00:28:33.022 [2024-05-15 17:13:11.615379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.615788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.615815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.616195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.616594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.616622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.616999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.617387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.617413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.617782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.618183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.618209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.618593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.618993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.619020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.619405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.619754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.619781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.620037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.620483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.620510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 
00:28:33.022 [2024-05-15 17:13:11.620781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.621029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.621054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.621454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.621807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.621836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.622219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.622502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.622528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.623002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.623366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.623391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.623800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.624177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.624203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.624594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.624990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.625016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 00:28:33.022 [2024-05-15 17:13:11.625439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.625816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.022 [2024-05-15 17:13:11.625844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.022 qpair failed and we were unable to recover it. 
00:28:33.022 [2024-05-15 17:13:11.626262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.022 [2024-05-15 17:13:11.626614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.022 [2024-05-15 17:13:11.626642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:33.022 qpair failed and we were unable to recover it.
00:28:33.022 [2024-05-15 17:13:11.627033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.022 [2024-05-15 17:13:11.627336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.022 [2024-05-15 17:13:11.627364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:33.022 qpair failed and we were unable to recover it.
[... the same four-line sequence, two posix_sock_create "connect() failed, errno = 111" entries followed by an nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.", repeats for every retry logged between 17:13:11.627736 and 17:13:11.739913 ...]
00:28:33.028 [2024-05-15 17:13:11.740233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.028 [2024-05-15 17:13:11.740632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.028 [2024-05-15 17:13:11.740661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:33.028 qpair failed and we were unable to recover it.
00:28:33.028 [2024-05-15 17:13:11.741047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.741420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.741446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.741885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.742263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.742289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.742648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.743028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.743054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.743352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.743734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.743761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.744160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.744579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.744606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.744966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.745279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.745304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.745695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.746055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.746081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 
00:28:33.028 [2024-05-15 17:13:11.746453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.746864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.746891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.747330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.747717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.747744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.748072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.748339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.748367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.748766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.749141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.749167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.749542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.749992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.750018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.750288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.750645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.750673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.751070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.751434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.751461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 
00:28:33.028 [2024-05-15 17:13:11.751851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.752221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.752246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.752627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.753009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.753037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.753293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.753640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.753668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.754061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.754414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.754440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.754917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.755184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.755212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.755617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.756012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.756040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 00:28:33.028 [2024-05-15 17:13:11.756429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.756810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.756839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.028 qpair failed and we were unable to recover it. 
00:28:33.028 [2024-05-15 17:13:11.757236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.028 [2024-05-15 17:13:11.757624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.757653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.758040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.758298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.758327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.758714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.759119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.759145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.759563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.759929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.759955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.760368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.760756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.760783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.761166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.761582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.761610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.761860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.762247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.762273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 
00:28:33.029 [2024-05-15 17:13:11.762631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.763036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.763062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.763439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.763815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.763842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.764214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.764601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.764628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.764989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.765343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.765368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.765765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.766147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.766173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.766555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.766994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.767020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.767382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.767644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.767674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 
00:28:33.029 [2024-05-15 17:13:11.768114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.768499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.768524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.768966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.769313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.769339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.769605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.770019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.770046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.770508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.770783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.770813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.771181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.771533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.771569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.771957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.772341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.772367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.772776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.773158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.773185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 
00:28:33.029 [2024-05-15 17:13:11.773573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.773947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.773974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.774319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.774712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.774741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.775137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.775403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.775429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.775839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.776224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.776250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.776633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.777044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.777070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.777368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.777731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.777757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.778129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.778523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.778559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 
00:28:33.029 [2024-05-15 17:13:11.778945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.779325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.779351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.029 qpair failed and we were unable to recover it. 00:28:33.029 [2024-05-15 17:13:11.779737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.029 [2024-05-15 17:13:11.780156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.780184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.780568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.780818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.780844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.781223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.781634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.781662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.782034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.782404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.782430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.782695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.783072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.783099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.783481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.783855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.783882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 
00:28:33.030 [2024-05-15 17:13:11.784269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.784625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.784652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.785048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.785408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.785435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.785800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.786201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.786227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.786605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.786953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.786980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.787345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.787738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.787765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.788080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.788445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.788472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.788887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.789249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.789276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 
00:28:33.030 [2024-05-15 17:13:11.789655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.790039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.790066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.790478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.790882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.790910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.791359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.791725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.791753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.792157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.792516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.792577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.792948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.793293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.793319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.793692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.794086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.794121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.794410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.794637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.794665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 
00:28:33.030 [2024-05-15 17:13:11.795021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.795290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.795316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.795686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.796081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.796109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.796507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.796799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.796826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.797208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.797586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.797613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.798075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.798450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.798479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.798863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.799215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.799243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.799618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.800113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.800139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 
00:28:33.030 [2024-05-15 17:13:11.800537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.800929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.800955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.801347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.801759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.801794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.802156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.802521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.030 [2024-05-15 17:13:11.802558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.030 qpair failed and we were unable to recover it. 00:28:33.030 [2024-05-15 17:13:11.802977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.803331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.803356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.803781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.804150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.804177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.804441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.804803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.804830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.805226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.805582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.805609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 
00:28:33.031 [2024-05-15 17:13:11.806056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.806416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.806443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.806778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.807158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.807184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.807580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.807954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.807981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.808358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.808711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.808738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.809122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.809491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.809522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.809930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.810300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.810326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.810767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.811120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.811146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 
00:28:33.031 [2024-05-15 17:13:11.811408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.811798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.811825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.812198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.812570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.812597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.812992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.813346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.813372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.813723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.814099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.814124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.814509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.814912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.814939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.815345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.815722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.815751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.816150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.816502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.816528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 
00:28:33.031 [2024-05-15 17:13:11.816929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.817304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.817335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.817709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.818100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.818127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.818515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.818874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.818901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.819288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.819661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.819688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.820048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.820423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.820448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.820889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.821248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.821273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.821658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.822037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.822062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 
00:28:33.031 [2024-05-15 17:13:11.822444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.822818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.822846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.823242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.823639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.823667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.824040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.824404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.824430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.824814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.825204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.825230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.031 qpair failed and we were unable to recover it. 00:28:33.031 [2024-05-15 17:13:11.825610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.826040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.031 [2024-05-15 17:13:11.826065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.826404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.826648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.826679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.827055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.827416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.827442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 
00:28:33.032 [2024-05-15 17:13:11.827818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.828166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.828192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.828554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.828815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.828840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.829240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.829652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.829679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.830025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.830381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.830406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.830784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.831053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.831078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.831489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.831881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.831910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.832283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.832651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.832677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 
00:28:33.032 [2024-05-15 17:13:11.833070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.833450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.833476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.833824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.834178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.834203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.834583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.834829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.834858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.835258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.835639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.835668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.836064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.836468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.836495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.836877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.837246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.837273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.837683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.838038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.838065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 
00:28:33.032 [2024-05-15 17:13:11.838450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.838777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.838804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.839202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.839566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.839593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.839977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.840324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.840350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.032 [2024-05-15 17:13:11.840727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.841111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.032 [2024-05-15 17:13:11.841137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.032 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.841535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.841838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.841864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.842254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.842675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.842704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.843118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.843466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.843491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-05-15 17:13:11.843863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.844267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.844293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.844649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.845023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.845049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.845191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.845611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.845638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.846009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.846385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.846412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.846652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.847035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.847062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.847301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.847629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.847656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.848056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.848332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.848357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-05-15 17:13:11.848718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.849084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.849110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.849452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.849888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.849915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.850273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.850743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.850771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.851137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.851559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.851586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.852005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.852257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.852282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.852686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.853049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.853075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.853462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.853900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.853927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-05-15 17:13:11.854364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.854721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.854748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.855144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.855518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.855544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.855823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.856182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.856208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.856592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.856975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.857001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.857385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.857841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.857867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.858241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.858620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.858648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-05-15 17:13:11.859050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.859398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.859424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-05-15 17:13:11.859720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-05-15 17:13:11.860089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.860115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.860504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.860871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.860899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.861289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.861561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.861588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.861980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.862367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.862393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.862758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.863094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.863120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.863508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.863867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.863894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.864275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.864657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.864684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-05-15 17:13:11.865056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.865444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.865470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.865922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.866268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.866294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.866676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.867085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.867111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.867518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.867910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.867938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.868333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.868706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.868733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.869117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.869496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.869522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.869899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.870303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.870328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-05-15 17:13:11.870591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.870941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.870969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.871330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.871724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.871752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.872134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.872445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.872471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.872845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.873205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.873231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.873642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.874082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.874108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.874506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.874900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.874927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.875316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.875693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.875721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-05-15 17:13:11.876091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.876439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.876465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.876826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.877155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.877181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.877563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.878016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.878042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.878394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.878772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.878800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.879172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.879584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.879611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.879993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.880401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.880426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.880817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.881231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.881257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-05-15 17:13:11.881580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.881940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.881965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.882355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.882715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-05-15 17:13:11.882742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-05-15 17:13:11.883142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.883514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.883541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.883934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.884288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.884314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.884566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.884950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.884976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.885316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.885662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.885689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.886044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.886417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.886444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-05-15 17:13:11.886715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.887094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.887122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.887498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.887846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.887874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.888248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.888601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.888627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.889038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.889408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.889435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.889844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.890242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.890268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.890670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.891081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.891106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.891488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.891884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.891911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-05-15 17:13:11.892299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.892686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.892713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.893117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.893471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.893496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.893958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.894345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.894371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.894744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.895122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.895148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.895517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.895905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.895933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.896302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.896694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.896721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.897109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.897468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.897494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-05-15 17:13:11.897842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.898135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.898161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.898567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.898932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.898958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.899296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.899664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.899691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.900074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.900455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.900481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.900856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.901187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.901212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.901574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.901963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.901989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.902358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.902641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.902674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-05-15 17:13:11.903074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.903464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.903491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.903880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.904228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.904255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.904649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.905027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.905052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-05-15 17:13:11.905407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.905760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-05-15 17:13:11.905787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.906171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.906579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.906607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.906970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.907312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.907338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.907723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.908108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.908135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-05-15 17:13:11.908527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.908901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.908928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.909301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.909649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.909676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.910084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.910481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.910507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.910947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.911346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.911373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.911764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.912120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.912146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.912520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.912992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.913021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.913388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.913766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.913794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-05-15 17:13:11.914201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.914598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.914626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.915000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.915409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.915435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.915823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.916194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.916220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.916613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.917006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.917032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.917372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.917725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.917752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.918135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.918508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.918541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-05-15 17:13:11.918825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.919118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-05-15 17:13:11.919145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.310 [2024-05-15 17:13:12.025503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.025855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.025883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.026305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.026646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.026674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.027054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.027407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.027433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.027806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.028186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.028214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.028573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.028811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.028839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.029223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.029580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.029607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.030064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.030435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.030461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 
00:28:33.310 [2024-05-15 17:13:12.030840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.031210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.031236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.031565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.031840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.031869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.032235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.032607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.032641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.033046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.033417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.033443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.033699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.034058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.034085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.034465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.034837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.034866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.310 qpair failed and we were unable to recover it. 00:28:33.310 [2024-05-15 17:13:12.035254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.035637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.310 [2024-05-15 17:13:12.035665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 
00:28:33.311 [2024-05-15 17:13:12.036058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.036436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.036463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.036846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.037199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.037225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.037617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.038038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.038064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.038449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.038789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.038818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.039216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.039602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.039630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.040010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.040290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.040322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.040692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.040948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.040974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 
00:28:33.311 [2024-05-15 17:13:12.041358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.041821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.041848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.042227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.042602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.042630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.043032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.043384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.043410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.043788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.044090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.044116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.044479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.044891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.044918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.045287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.045644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.045672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.046058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.046427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.046453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 
00:28:33.311 [2024-05-15 17:13:12.046822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.047193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.047220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.047603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.047999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.048031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.048419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.048798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.048826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.049204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.049598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.049625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.050048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.050444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.050471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.050766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.051140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.051167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.051536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.051928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.051954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 
00:28:33.311 [2024-05-15 17:13:12.052334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.052709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.052737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.052999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.053366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.053393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.053752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.054006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.054031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.311 [2024-05-15 17:13:12.054414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.054798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.311 [2024-05-15 17:13:12.054827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.311 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.055197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.055573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.055621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.056012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.056256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.056286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.056676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.057102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.057129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-05-15 17:13:12.057484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.057854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.057889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.058245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.058603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.058631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.059025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.059265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.059294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.059663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.060050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.060077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.060447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.060854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.060881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.061269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.061641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.061669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.062071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.062390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.062416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-05-15 17:13:12.062800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.063153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.063180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.063552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.063919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.063945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.064327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.064647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.064673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.065077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.065429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.065454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.065837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.066167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.066193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.066583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.066834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.066864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.067312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.067683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.067711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-05-15 17:13:12.068109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.068476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.068502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.068933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.069319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.069346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.069625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.070078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.070104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.070480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.070822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.070849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.071217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.071599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.071628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.071977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.072385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.072411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.072843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.073235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.073262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 
00:28:33.312 [2024-05-15 17:13:12.073627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.073997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.074023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.074414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.074826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.074853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.075253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.075631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.075660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.076018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.076368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.076395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.076785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.077139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.077165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.312 qpair failed and we were unable to recover it. 00:28:33.312 [2024-05-15 17:13:12.077539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.312 [2024-05-15 17:13:12.077919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.077945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.078321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.078682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.078710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-05-15 17:13:12.079109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.079475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.079501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.079927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.080304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.080331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.080711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.081066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.081092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.081437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.081838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.081865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.082223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.082594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.082621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.083069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.083441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.083468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.083845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.084100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.084128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-05-15 17:13:12.084517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.084895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.084923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.085293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.085646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.085674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.086114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.086462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.086489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.086947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.087320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.087347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.087705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.088074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.088102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.088487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.088857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.088884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.089136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.089498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.089524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-05-15 17:13:12.089935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.090345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.090371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.090776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.091140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.091166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.091534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.091931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.091957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.092335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.092713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.092740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.093117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.093492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.093519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.093798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.094153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.094180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.094458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.094845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.094875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-05-15 17:13:12.095247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.095621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.095648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.096032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.096408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.096435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.096793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.097134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.097160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.097556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.097894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.097920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.098318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.098698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.098726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.099107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.099477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.099503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 00:28:33.313 [2024-05-15 17:13:12.099878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.100249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.100277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.313 qpair failed and we were unable to recover it. 
00:28:33.313 [2024-05-15 17:13:12.100675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.313 [2024-05-15 17:13:12.101030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.101056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.101428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.101802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.101829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.102214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.102601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.102629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.103020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.103317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.103344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.103727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.104090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.104116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.104484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.104715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.104745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.105039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.105425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.105452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-05-15 17:13:12.105815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.106184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.106210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.106580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.106957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.106983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.107414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.107842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.107869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.108252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.108658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.108685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.109041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.109288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.109317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.109729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.110082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.110108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.110485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.110851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.110879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-05-15 17:13:12.111256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.111587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.111630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.112033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.112437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.112462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.112914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.113279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.113306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.113670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.114055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.114081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.114463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.114802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.114830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.115188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.115541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.115576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.115966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.116339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.116365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-05-15 17:13:12.116749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.117112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.117139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.117519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.117873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.117900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.118271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.118633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.118660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.119038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.119340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.119365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.119729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.120079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.120104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.120481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.120827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.120854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.121132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.121543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.121582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 
00:28:33.314 [2024-05-15 17:13:12.121971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.122393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.122419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.314 [2024-05-15 17:13:12.122804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.123179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-05-15 17:13:12.123206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.314 qpair failed and we were unable to recover it. 00:28:33.315 [2024-05-15 17:13:12.123571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.123851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.123877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-05-15 17:13:12.124279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.124713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.124741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-05-15 17:13:12.125134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.125528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.125565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-05-15 17:13:12.125937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.126307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.126333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-05-15 17:13:12.126601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.126893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.126920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-05-15 17:13:12.127288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.127648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-05-15 17:13:12.127676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.128081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.128445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.128472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.128857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.129220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.129247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.129631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.129992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.130018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.130271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.130646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.130675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.131042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.131441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.131468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.131847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.132229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.132256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 
00:28:33.584 [2024-05-15 17:13:12.132645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.132900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.132929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.133218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.133631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.133659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.134044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.134438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.134465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.134823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.135066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.135096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.135463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.135861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.135888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.136282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.136691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.584 [2024-05-15 17:13:12.136718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.584 qpair failed and we were unable to recover it. 00:28:33.584 [2024-05-15 17:13:12.137115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.137512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.137538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-05-15 17:13:12.137921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.138343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.138370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.138723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.139102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.139129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.139523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.139913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.139940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.140305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.140676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.140703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.141083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.141452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.141479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.141887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.142244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.142269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.142659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.142997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.143024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-05-15 17:13:12.143413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.143810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.143837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.144199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.144574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.144601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.144949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.145337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.145363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.145704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.146094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.146120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.146495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.146891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.146919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.147293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.147663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.147690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.148014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.148407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.148434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-05-15 17:13:12.148834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.149170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.149195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.149579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.149943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.149969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.150301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.150650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.150677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.151098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.151525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.151560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.151926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.152280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.152306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.152712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.153105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.153132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.153504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.153890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.153917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.585 [2024-05-15 17:13:12.154307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.154694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.154722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.155098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.155446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.155472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.155724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.156106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.156138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.156498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.156842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.156871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.157247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.157624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.157652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.157954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.158319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.158345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 00:28:33.585 [2024-05-15 17:13:12.158725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.159156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.585 [2024-05-15 17:13:12.159183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.585 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-05-15 17:13:12.159453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.159824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.159852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.160098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.160436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.160462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.160810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.161176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.161202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.161568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.161824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.161852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.162229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.162476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.162504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.162900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.163327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.163359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.163728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.164103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.164129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-05-15 17:13:12.164505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.164750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.164781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.165173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.165553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.165582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.166010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.166364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.166390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.166807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.167070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.167096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.167378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.167753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.167783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.168161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.168528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.168562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.168739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.169084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.169110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-05-15 17:13:12.169388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.169756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.169784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.170153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.170526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.170567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.170836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.171233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.171259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.171631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.172002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.172029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.172415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.172785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.172812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.173182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.173573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.173601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.173911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.174284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.174310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-05-15 17:13:12.174677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.175050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.175076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.175449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.175781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.175817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.176168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.176524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.176558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.176936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.177335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.177362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.177746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.177986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.178021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.178393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.178746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.178773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.179168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.179542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.179592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 
00:28:33.586 [2024-05-15 17:13:12.179981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.180341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.180367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.586 qpair failed and we were unable to recover it. 00:28:33.586 [2024-05-15 17:13:12.180725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.586 [2024-05-15 17:13:12.181139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.181165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.181380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.181756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.181784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.182184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.182556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.182585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.183024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.183258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.183284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.183651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.183902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.183931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.184226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.184593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.184621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 
00:28:33.587 [2024-05-15 17:13:12.184995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.185374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.185400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.185778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.186028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.186054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.186418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.186680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.186706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.187101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.187485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.187511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.187935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.188403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.188430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.188808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.189166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.189191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.189574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.189847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.189873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 
00:28:33.587 [2024-05-15 17:13:12.190252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.190538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.190573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.190981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.191263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.191291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.191649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.191912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.191937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.192216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.192609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.192638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.192938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.193207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.193232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.193625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.194063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.194089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.194480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.194887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.194914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 
00:28:33.587 [2024-05-15 17:13:12.195373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.195689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.195716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.196077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.196329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.196356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.196602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.196990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.197025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.197435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.197815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.197842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.198220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.198601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.198627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.199156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.199513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.199538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.199812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.200220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.200246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 
00:28:33.587 [2024-05-15 17:13:12.200613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.201012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.201038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.201307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.201728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.201756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.587 [2024-05-15 17:13:12.202158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.202565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.587 [2024-05-15 17:13:12.202592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.587 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.202868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.203246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.203272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.203657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.204013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.204039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.204421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.204796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.204824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.205216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.205589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.205616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 
00:28:33.588 [2024-05-15 17:13:12.205881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.206226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.206252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.206569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.206960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.206986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.207293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.207525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.207574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.207893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.208266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.208292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.208642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.209012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.209038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.209400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.209770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.209798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.210185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.210594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.210621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 
00:28:33.588 [2024-05-15 17:13:12.210873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.211250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.211277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.211673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.212036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.212063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.212333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.212688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.212716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.213065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.213459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.213484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.213734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.214128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.214154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.214385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.214729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.214756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.215139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.215541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.215579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 
00:28:33.588 [2024-05-15 17:13:12.215895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.216268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.216294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.216666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.217041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.217067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.217309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.217690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.217717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.218092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.218352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.218378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.218633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.218934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.218959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.219344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.219601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.219631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.220031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.220399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.220425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 
00:28:33.588 [2024-05-15 17:13:12.220805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.221151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.221178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.221460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.221720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.221747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.588 qpair failed and we were unable to recover it. 00:28:33.588 [2024-05-15 17:13:12.222135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.588 [2024-05-15 17:13:12.222540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.222576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.222967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.223354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.223380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.223760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.224043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.224069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.224434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.224802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.224829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.225252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.225492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.225518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 
00:28:33.589 [2024-05-15 17:13:12.225989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.226361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.226388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.226749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.227011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.227036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.227405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.227782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.227809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.228041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.228431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.228457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.228890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.229268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.229295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.229672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.230030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.230056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.230331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.230654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.230680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 
00:28:33.589 [2024-05-15 17:13:12.231052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.231446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.231472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.231903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.232297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.232324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.232704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.233083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.233109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.233475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.233848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.233875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.234252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.234629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.234656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.235038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.235363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.235389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.235684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.236059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.236086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 
00:28:33.589 [2024-05-15 17:13:12.236472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.236859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.236887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.237267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.237642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.237669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.238068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.238471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.238497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.238862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.239237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.239263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.239647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.239997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.240022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.240411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.240755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.240783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 00:28:33.589 [2024-05-15 17:13:12.241049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.241410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.241436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.589 qpair failed and we were unable to recover it. 
00:28:33.589 [2024-05-15 17:13:12.241780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.589 [2024-05-15 17:13:12.242150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.242176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.242542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.242903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.242930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.243331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.243612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.243638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.244009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.244260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.244286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.244610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.244910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.244937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.245289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.245606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.245633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.245984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.246344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.246370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 
00:28:33.590 [2024-05-15 17:13:12.246766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.247142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.247169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.247556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.247871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.247896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.248262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.248616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.248644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.249078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.249480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.249506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.249955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.250322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.250348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.250589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.250980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.251006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.251378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.251618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.251649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 
00:28:33.590 [2024-05-15 17:13:12.252034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.252410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.252438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.252715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.253038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.253065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.253330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.253593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.253620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.253988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.254229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.254259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.254634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.255035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.255061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.255337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.255623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.255651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.256056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.256451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.256477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 
00:28:33.590 [2024-05-15 17:13:12.256758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.257153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.257179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.257555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.257953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.257979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.258369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.258752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.258779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.259162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.259533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.259568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.259962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.260350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.260376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.260771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.261127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.261153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.261554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.261916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.261941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 
00:28:33.590 [2024-05-15 17:13:12.262317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.262726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.262753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.590 [2024-05-15 17:13:12.263137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.263431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.590 [2024-05-15 17:13:12.263457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.590 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.263804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.264175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.264201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.264574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.264952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.264979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.265364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.265714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.265740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.266108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.266471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.266498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.266879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.267291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.267318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 
00:28:33.591 [2024-05-15 17:13:12.267690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.268077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.268103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.268477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.268821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.268849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.269124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.269484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.269509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.269896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.270286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.270312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.270716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.271062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.271089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.271486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.271843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.271869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.272243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.272623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.272651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 
00:28:33.591 [2024-05-15 17:13:12.273024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.273396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.273421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.273662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.274032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.274058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.274328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.274743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.274778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.275148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.275399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.275429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.275798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.276156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.276183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.276558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.276937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.276965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.277355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.277716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.277745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 
00:28:33.591 [2024-05-15 17:13:12.278137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.278512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.278540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.278926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.279299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.279327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.279705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.280088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.280115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.280401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.280792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.280819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.281216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.281578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.281607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.281998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.282366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.282397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.282791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.283164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.283190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 
00:28:33.591 [2024-05-15 17:13:12.283568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.283812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.283838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.284157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.284533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.284574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.284968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.285232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.285258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.591 qpair failed and we were unable to recover it. 00:28:33.591 [2024-05-15 17:13:12.285667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.286035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.591 [2024-05-15 17:13:12.286061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.286433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.286815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.286843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.287229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.287596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.287625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.287920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.288273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.288299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 
00:28:33.592 [2024-05-15 17:13:12.288676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.289040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.289066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.289454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.289837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.289871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.290283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.290645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.290672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.291066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.291467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.291493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.291958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.292343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.292369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.292840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.293218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.293245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.293593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.293989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.294016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 
00:28:33.592 [2024-05-15 17:13:12.294407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.294843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.294870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.295266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.295624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.295651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.296052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.296381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.296407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.296833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.297207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.297233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.297468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.297860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.297893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.298256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.298628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.298655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.299001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.299409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.299435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 
00:28:33.592 [2024-05-15 17:13:12.299749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.300095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.300120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.300421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.300810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.300837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.301184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.301443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.301473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.301875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.302236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.302263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.302618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.302996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.303021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.303414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.303787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.303813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.304185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.304581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.304608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 
00:28:33.592 [2024-05-15 17:13:12.304986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.305347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.305372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.305754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.306138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.306164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.306499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.306860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.306888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.307287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.307685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.307712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.307968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.308349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.308375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.592 qpair failed and we were unable to recover it. 00:28:33.592 [2024-05-15 17:13:12.308756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.592 [2024-05-15 17:13:12.309132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.309158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.309525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.309885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.309913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 
00:28:33.593 [2024-05-15 17:13:12.310162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.310535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.310573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.310974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.311371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.311396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.311757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.312099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.312125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.312497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.312750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.312776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.313195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.313560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.313587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.313948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.314317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.314343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.314728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.315152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.315178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 
00:28:33.593 [2024-05-15 17:13:12.315577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.315983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.316009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.316399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.316774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.316802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.317189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.317564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.317592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-05-15 17:13:12.317954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.318237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-05-15 17:13:12.318263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.318639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.318995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.319021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.319451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.319841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.319869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.320250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.320594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.320621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 
00:28:33.594 [2024-05-15 17:13:12.321006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.321379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.321406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.321795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.322148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.322174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.322564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.322994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.323020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.323374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.323764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.323793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.594 qpair failed and we were unable to recover it. 00:28:33.594 [2024-05-15 17:13:12.324169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.594 [2024-05-15 17:13:12.324560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.324589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.325004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.325387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.325413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.325661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.326000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.326026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 
00:28:33.595 [2024-05-15 17:13:12.326251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.326659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.326686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.326930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.327258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.327285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.327737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.328095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.328121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.328529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.328931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.328960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.329389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.329822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.329849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.330232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.330629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.330657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.331049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.331378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.331404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 
00:28:33.595 [2024-05-15 17:13:12.331808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.332176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.332202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.332585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.332927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.332955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.333264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.333622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.333650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.334054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.334386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.334412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.334823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.335087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.335112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.335482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.335833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.335860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.336251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.336628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.336656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 
00:28:33.595 [2024-05-15 17:13:12.337038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.337411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.337437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.337808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.338164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.338191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.338582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.338960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.338990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.339352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.339726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.339755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.340149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.340493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.340519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.340963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.341348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.341374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.341779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.342154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.342180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 
00:28:33.595 [2024-05-15 17:13:12.342569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.342966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.342992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.343407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.343804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.343832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.344112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.344525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.344561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.344923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.345292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.345317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.345726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.346164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.346191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.595 qpair failed and we were unable to recover it. 00:28:33.595 [2024-05-15 17:13:12.346584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.595 [2024-05-15 17:13:12.346828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.346858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.347126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.347538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.347578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-05-15 17:13:12.348071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.348404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.348431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.348882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.349144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.349169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.349430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.349794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.349822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.350205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.350576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.350603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.350951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.351346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.351372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.351757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.352136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.352163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.352454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.352840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.352867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-05-15 17:13:12.353235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.353606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.353633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.354027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.354403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.354429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.354797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.355169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.355195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.355589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.355995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.356022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.356410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.356792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.356820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.357178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.357578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.357606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.358005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.358342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.358368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-05-15 17:13:12.358743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.359120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.359147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.359540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.359940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.359965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.360345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.360749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.360777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.361153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.361528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.361563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.361923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.362286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.362311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.362686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.363066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.363093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.363471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.363840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.363867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-05-15 17:13:12.364239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.364592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.364622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.365024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.365412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.365440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.365723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.366115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.366140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.366525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.366757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.366786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.367152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.367541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.367580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.367948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.368310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.368335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-05-15 17:13:12.368709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.369102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-05-15 17:13:12.369128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-05-15 17:13:12.369510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.369876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.369903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.370283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.370660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.370687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.371091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.371333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.371363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.371781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.372129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.372155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.372501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.372901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.372929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.373266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.373609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.373636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.374015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.374383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.374410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-05-15 17:13:12.374796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.375169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.375197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.375602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.375883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.375909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.376305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.376661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.376690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.377100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.377473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.377500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.377895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.378291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.378318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.378681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.379040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.379067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.379453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.379798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.379828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-05-15 17:13:12.380203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.380567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.380595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.380960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.381322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.381348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.381734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.382115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.382142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.382410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.382804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.382832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.383200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.383607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.383635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.384046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.384424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.384450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.384804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.385172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.385199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-05-15 17:13:12.385628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.385990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.386016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.386372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.386624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.386655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.387036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.387409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.387435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.387720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.388120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.388147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.388503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.388917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.388945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.389341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.389706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.389733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-05-15 17:13:12.390106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.390500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-05-15 17:13:12.390533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-05-15 17:13:12.390928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.597 [2024-05-15 17:13:12.391294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.597 [2024-05-15 17:13:12.391321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:33.597 qpair failed and we were unable to recover it.
(The same three-message failure sequence, consisting of the posix_sock_create "connect() failed, errno = 111" errors, the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats for every reconnect attempt in the console output, spanning elapsed times 00:28:33.597 through 00:28:33.873 and wall-clock timestamps 17:13:12.390 through 17:13:12.506.)
00:28:33.873 [2024-05-15 17:13:12.506603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.506988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.507014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.873 qpair failed and we were unable to recover it. 00:28:33.873 [2024-05-15 17:13:12.507382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.507662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.507690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.873 qpair failed and we were unable to recover it. 00:28:33.873 [2024-05-15 17:13:12.508084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.508349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.508373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.873 qpair failed and we were unable to recover it. 00:28:33.873 [2024-05-15 17:13:12.508732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.508991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.509019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.873 qpair failed and we were unable to recover it. 00:28:33.873 [2024-05-15 17:13:12.509456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.509887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.509914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.873 qpair failed and we were unable to recover it. 00:28:33.873 [2024-05-15 17:13:12.510290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.510663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.510691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.873 qpair failed and we were unable to recover it. 00:28:33.873 [2024-05-15 17:13:12.511072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 17:13:12.511425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.511457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 
00:28:33.874 [2024-05-15 17:13:12.511919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.512385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.512411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.512661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.513120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.513146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.513576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.513933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.513959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.514337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.514706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.514733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.515139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.515510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.515537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.515900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.516272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.516299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.516687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.517078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.517104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 
00:28:33.874 [2024-05-15 17:13:12.517469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.517856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.517883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.518259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.518654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.518681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.519056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.519429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.519461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.519816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.520187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.520213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.520593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.520985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.521011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.521380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.521634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.521664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.522049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.522418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.522445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 
00:28:33.874 [2024-05-15 17:13:12.522807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.523161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.523187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.523574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.523972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.523999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.524373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.524636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.524665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.525056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.525413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.525438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.525809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.526191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.526217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.526603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.526994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.527025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.527400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.527785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.527812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 
00:28:33.874 [2024-05-15 17:13:12.528160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.528532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.528568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.528932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.529310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.529336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.529737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.530113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.530139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.530527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.530912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.530939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.531326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.531603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.531632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.532032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.532384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.532409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 00:28:33.874 [2024-05-15 17:13:12.532792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.533162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.533188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.874 qpair failed and we were unable to recover it. 
00:28:33.874 [2024-05-15 17:13:12.533572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.533963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 17:13:12.533989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.534393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.534750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.534801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.535203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.535586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.535615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.536046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.536460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.536486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.536893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.537275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.537301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.537714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.538087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.538113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.538503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.538875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.538902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 
00:28:33.875 [2024-05-15 17:13:12.539263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.539507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.539534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.539977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.540223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.540252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.540655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.541020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.541046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.541417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.541790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.541818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.542250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.542607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.542634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.542887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.543152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.543178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.543500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.543748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.543778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 
00:28:33.875 [2024-05-15 17:13:12.544145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.544511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.544537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.544935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.545249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.545275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.545657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.546046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.546072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.546447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.546916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.546943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.547301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.547695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.547722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.548086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.548455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.548481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.548906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.549304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.549329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 
00:28:33.875 [2024-05-15 17:13:12.549711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.549905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.549931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.550340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.550702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.550728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.875 qpair failed and we were unable to recover it. 00:28:33.875 [2024-05-15 17:13:12.551108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.875 [2024-05-15 17:13:12.551475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.551501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.551773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.552183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.552209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.552596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.552995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.553022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.553466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.553878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.553906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.554288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.554659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.554688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 
00:28:33.876 [2024-05-15 17:13:12.555076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.555447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.555474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.555882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.556240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.556266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.556654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.557036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.557061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.557422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.557800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.557828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.558205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.558564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.558591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.558974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.559315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.559341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.559740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.560151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.560176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 
00:28:33.876 [2024-05-15 17:13:12.560570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.560937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.560963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.561390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.561863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.561965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.562452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.562850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.562880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.563288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.563644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.563672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.564045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.564280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.564307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.564684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.564964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.564999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.565370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.565750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.565780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 
00:28:33.876 [2024-05-15 17:13:12.566106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.566478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.566505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.566905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.567308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.567336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.567696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.568119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.568146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.568409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.568874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.568903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.569303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.569667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.569696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.570098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.570468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.570494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.570852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.571183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.571209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 
00:28:33.876 [2024-05-15 17:13:12.571599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.571969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.571995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.572359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.572508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.572539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.572891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.573278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.573305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.876 qpair failed and we were unable to recover it. 00:28:33.876 [2024-05-15 17:13:12.573690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.876 [2024-05-15 17:13:12.574087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.574113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.574502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.574882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.574909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.575279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.575652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.575680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.576035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.576388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.576415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 
00:28:33.877 [2024-05-15 17:13:12.576805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.577157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.577183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.577587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.577962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.577988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.578354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.578748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.578775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.579153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.579502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.579528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.579930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.580304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.580331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.580724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.581094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.581120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.581513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.581918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.581945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 
00:28:33.877 [2024-05-15 17:13:12.582394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.582741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.582769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.583142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.583406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.583431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.583735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.584090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.584116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.584518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.584931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.584960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.585305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.585669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.585696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.586064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.586425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.586451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.586901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.587247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.587272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 
00:28:33.877 [2024-05-15 17:13:12.587626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.587994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.588020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.588382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.588639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.588671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.588960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.589381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.589407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.589782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.590192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.590218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.590602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.590992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.591018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.591296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.591646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.591674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.592044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.592419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.592444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 
00:28:33.877 [2024-05-15 17:13:12.592846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.593225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.593252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.593643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.594044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.594070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.594447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.594793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.594821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.595217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.595582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.877 [2024-05-15 17:13:12.595609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.877 qpair failed and we were unable to recover it. 00:28:33.877 [2024-05-15 17:13:12.596048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.596414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.596441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.596807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.597172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.597198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.597585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.598007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.598032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 
00:28:33.878 [2024-05-15 17:13:12.598422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.598800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.598827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.599195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.599574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.599602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.599990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.600340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.600367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.600747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.601116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.601142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.601483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.601854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.601881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.602152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.602568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.602596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.602976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.603348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.603374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 
00:28:33.878 [2024-05-15 17:13:12.603765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.604130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.604158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.604595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.604977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.605004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.605367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.605741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.605768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.606066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.606468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.606493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.606868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.607222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.607249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.607669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.608034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.608060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.608490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.608870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.608898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 
00:28:33.878 [2024-05-15 17:13:12.609257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.609623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.609650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.609949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.610178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.610207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.610587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.611005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.611031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.611287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.611673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.611701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.612088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.612464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.612491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.612769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.613188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.613214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 00:28:33.878 [2024-05-15 17:13:12.613604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.613958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.613984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.878 qpair failed and we were unable to recover it. 
00:28:33.878 [2024-05-15 17:13:12.614349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.878 [2024-05-15 17:13:12.614597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.614627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.615028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.615408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.615435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.615868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.616149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.616175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.616584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.616961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.616988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.617432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.617674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.617701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.618067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.618438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.618464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.618878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.619112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.619141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 
00:28:33.879 [2024-05-15 17:13:12.619522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.619926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.619955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.620358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.620734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.620764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.621159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.621419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.621452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.621886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.622315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.622341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.622725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.623101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.623128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.623515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.623926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.623953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.624323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.624658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.624686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 
00:28:33.879 [2024-05-15 17:13:12.625067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.625483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.625508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.625905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.626275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.626300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.626675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.627049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.627076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.627472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.627845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.627873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.628247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.628586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.628614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.628986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.629345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.629371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.629697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.630060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.630086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 
00:28:33.879 [2024-05-15 17:13:12.630363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.630742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.630768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.631152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.631543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.631583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.631969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.632330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.632356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.632732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.633074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.633099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.633375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.633730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.633757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.634107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.634447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.634473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.634873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.635229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.635261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 
00:28:33.879 [2024-05-15 17:13:12.635657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.636048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.636075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.879 [2024-05-15 17:13:12.636323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.636701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.879 [2024-05-15 17:13:12.636730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.879 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.637100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.637464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.637490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.637854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.638234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.638261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.638649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.639034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.639060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.639335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.639691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.639719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.639948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.640291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.640317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-05-15 17:13:12.640720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.640972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.641000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.641394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.641793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.641820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.642236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.642638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.642673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.643080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.643477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.643503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.643788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.644170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.644196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.644641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.645011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.645038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.645428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.645856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.645884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-05-15 17:13:12.646240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.646636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.646662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.647031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.647381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.647407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.647839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.648201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.648227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.648481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.648862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.648890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.649271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.649669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.649697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.650080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.650434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.650466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.650843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.651211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.651237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-05-15 17:13:12.651674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.652039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.652065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.652510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.652914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.652943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.653317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.653702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.653729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.654111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.654496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.654522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.654919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.655282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.655308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.655690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.656140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.656166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.656590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.656887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.656913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-05-15 17:13:12.657300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.657653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.657679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.658059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.658373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.658405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.658766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.659069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.880 [2024-05-15 17:13:12.659095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.880 qpair failed and we were unable to recover it. 00:28:33.880 [2024-05-15 17:13:12.659478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.659854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.659883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.660269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.660696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.660723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.661118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.661520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.661556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.661947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.662201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.662231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 
00:28:33.881 [2024-05-15 17:13:12.662497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.662847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.662875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.663252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.663625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.663652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.664036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.664367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.664393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.664760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.665150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.665176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.665543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.665929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.665955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.666406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.666806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.666833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.667194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.667565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.667592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 
00:28:33.881 [2024-05-15 17:13:12.667921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.668289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.668315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.668692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.668942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.668971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.669348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.669725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.669753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.670149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.670538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.670579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.670942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.671283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.671309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.671686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.672081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.672108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.672507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.672912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.672940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 
00:28:33.881 [2024-05-15 17:13:12.673327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.673722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.673749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.674148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.674528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.674564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.674963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.675331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.675357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.675731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.676087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.676113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.676474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.676833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.676860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.677117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.677477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.677504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 00:28:33.881 [2024-05-15 17:13:12.677894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.678318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.881 [2024-05-15 17:13:12.678344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:33.881 qpair failed and we were unable to recover it. 
00:28:33.881 [2024-05-15 17:13:12.678671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.881 [2024-05-15 17:13:12.679035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.881 [2024-05-15 17:13:12.679061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:33.881 qpair failed and we were unable to recover it.
[... the same sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f720c000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 17:13:12.679 and 17:13:12.794; only the timestamps differ ...]
00:28:34.160 [2024-05-15 17:13:12.794208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.160 [2024-05-15 17:13:12.794564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.160 [2024-05-15 17:13:12.794593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.160 qpair failed and we were unable to recover it.
00:28:34.160 [2024-05-15 17:13:12.794970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.795327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.795353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.795754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.796021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.796047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.796432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.796803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.796832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.797223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.797576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.797604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.797984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.798355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.798383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.798805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.799178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.799203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.799603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.799871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.799899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 
00:28:34.160 [2024-05-15 17:13:12.800293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.800689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.800718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.801095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.801472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.801503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.803294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.803640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.803680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.804067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.804441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.804470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.804870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.805281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.805307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.805597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.805970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.805997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 00:28:34.160 [2024-05-15 17:13:12.806380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.806751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.160 [2024-05-15 17:13:12.806780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.160 qpair failed and we were unable to recover it. 
00:28:34.160 [2024-05-15 17:13:12.807191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.808921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.808977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.809426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.809796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.809826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.810195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.810578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.810607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.811007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.812089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.812130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.812512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.812900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.812929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.813318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.813700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.813733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.814092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.814455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.814482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 
00:28:34.161 [2024-05-15 17:13:12.814871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.815281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.815309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.815582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.816005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.816032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.816428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.816762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.816789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.817179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.817571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.817602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.817955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.818344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.818371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.818711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.819078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.819105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.819380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.819787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.819817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 
00:28:34.161 [2024-05-15 17:13:12.820193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.820489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.820514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.820929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.821259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.821286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.821559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.821943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.821969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.822337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.822733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.822761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.823133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.823538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.823580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.823933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.824292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.824319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.824690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.825063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.825090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 
00:28:34.161 [2024-05-15 17:13:12.825498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.825883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.825911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.826284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.826670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.826699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.827060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.827427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.827454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.827822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.828200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.828227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.828595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.829006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.829032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.161 qpair failed and we were unable to recover it. 00:28:34.161 [2024-05-15 17:13:12.829410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.161 [2024-05-15 17:13:12.829756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.829785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.830150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.830526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.830571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 
00:28:34.162 [2024-05-15 17:13:12.830962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.831336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.831363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.831733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.832135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.832161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.832559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.832923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.832949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.833310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.833686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.833713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.834093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.834466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.834493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.834866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.835225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.835251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.835523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.835806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.835834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 
00:28:34.162 [2024-05-15 17:13:12.836220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.836603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.836630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.836914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.837288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.837314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.837728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.838064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.838091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.838477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.838731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.838763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.839153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.839522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.839562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.839963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.840360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.840387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.840776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.841137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.841163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 
00:28:34.162 [2024-05-15 17:13:12.841567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.842033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.842059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.842430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.842772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.842801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.843185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.843579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.843606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.843990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.844349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.844375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.844727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.845127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.845153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.845538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.845995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.846022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 00:28:34.162 [2024-05-15 17:13:12.846401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.846809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.846836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.162 qpair failed and we were unable to recover it. 
00:28:34.162 [2024-05-15 17:13:12.847184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.162 [2024-05-15 17:13:12.847555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.847582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.848006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.848275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.848309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.848718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.849104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.849130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.849529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.849950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.849977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.850334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.850715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.850743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.851119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.851496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.851523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.851915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.852082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.852110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 
00:28:34.163 [2024-05-15 17:13:12.852503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.852951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.852978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.853320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.853720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.853747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.854117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.854489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.854516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.854947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.855336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.855363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.855737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.856124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.856151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.856517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.856920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.856948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.857319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.857716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.857744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 
00:28:34.163 [2024-05-15 17:13:12.858133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.858367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.858393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.858844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.859216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.859242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.859542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.859962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.859988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.860363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.860697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.860724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.861114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.861528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.861566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.861992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.862384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.862409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.862775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.863147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.863174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 
00:28:34.163 [2024-05-15 17:13:12.863567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.863936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.863964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.864355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.864750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.864777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.865146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.865527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.865565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.865916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.866312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.866338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.163 qpair failed and we were unable to recover it. 00:28:34.163 [2024-05-15 17:13:12.866763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.867161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.163 [2024-05-15 17:13:12.867188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.867576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.867946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.867973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.868256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.868647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.868674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 
00:28:34.164 [2024-05-15 17:13:12.869028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.869436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.869463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.869824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.870092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.870117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.870503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.870903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.870930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.871310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.871714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.871742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.872098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.872481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.872508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.872864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.873256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.873283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.873656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.874043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.874069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 
00:28:34.164 [2024-05-15 17:13:12.874451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.874826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.874853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.875235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.875637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.875664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.876040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.876415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.876441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.876798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.877202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.877234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.877627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.878035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.878064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.878274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.878650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.878680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.879054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.879435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.879461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 
00:28:34.164 [2024-05-15 17:13:12.879844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.880215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.880242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.880511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.880779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.880810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.881190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.881563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.881592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.881963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.882332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.882358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.882616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.883052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.883079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.883353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.883694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.883721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 00:28:34.164 [2024-05-15 17:13:12.884105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.884449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.164 [2024-05-15 17:13:12.884482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.164 qpair failed and we were unable to recover it. 
00:28:34.165 [2024-05-15 17:13:12.884883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.885247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.885274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.885510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.885887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.885914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.886174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.886518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.886544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.886950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.887330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.887356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.887752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.888130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.888155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.888535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.888828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.888855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.889254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.889625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.889653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 
00:28:34.165 [2024-05-15 17:13:12.890018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.890444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.890470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.890831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.891211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.891237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.891619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.891991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.892025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.892446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.892816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.892844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.893231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.893581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.893608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.893991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.894368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.894396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.894826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.895180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.895206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 
00:28:34.165 [2024-05-15 17:13:12.895574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.895932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.895958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.896300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.896702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.896731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.897104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.897453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.897478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.897852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.898214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.898239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.898606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.898993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.899019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.899294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.899450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.899483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.165 qpair failed and we were unable to recover it. 00:28:34.165 [2024-05-15 17:13:12.899841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.900203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.165 [2024-05-15 17:13:12.900229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 
00:28:34.166 [2024-05-15 17:13:12.900631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.900983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.901010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.901386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.901752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.901780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.902204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.902607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.902635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.903033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.903256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.903285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.903653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.904037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.904063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.904485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.904770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.904797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.905211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.905569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.905598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 
00:28:34.166 [2024-05-15 17:13:12.905993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.906340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.906366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.906749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.907026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.907053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.907424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.907800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.907829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.908197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.908578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.908605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.908987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.909349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.909375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.909669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.910050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.910077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.910450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.910853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.910880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 
00:28:34.166 [2024-05-15 17:13:12.911330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.911732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.911759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.912127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.912496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.912523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.912901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.913238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.913264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.913665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.914073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.914099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.914400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.914581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.914613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.915012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.915348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.915374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.915750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.916119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.916145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 
00:28:34.166 [2024-05-15 17:13:12.916516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.916938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.916965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.917218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.917613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.917640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.166 [2024-05-15 17:13:12.917920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.918278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.166 [2024-05-15 17:13:12.918304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.166 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.918670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.919049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.919077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.919446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.919871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.919900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.920250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.920630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.920658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.921016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.921390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.921417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 
00:28:34.167 [2024-05-15 17:13:12.921824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.922205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.922232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.922626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.923006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.923034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.923374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.923686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1645671 Killed "${NVMF_APP[@]}" "$@" 00:28:34.167 [2024-05-15 17:13:12.923715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.924033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.924443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.924470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:34.167 [2024-05-15 17:13:12.924856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.167 [2024-05-15 17:13:12.925213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.925240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 
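The shell trace interleaved above is the reason for the flood of refused connections: target_disconnect.sh reports the previously started target application (PID 1645671) as Killed, then runs disconnect_init 10.0.0.2, which in turn calls nvmfappstart -m 0xF0 to bring up a fresh target. Until that new process is listening on port 4420 again, every reconnect attempt by the host fails as above. The sketch below is only an assumed, simplified view of that retry behaviour from the host side (the real reconnect logic lives in the NVMe/TCP initiator, not in a loop like this):

/* Sketch only (assumed retry policy, not the SPDK driver's): keep retrying
 * the TCP connection for a bounded time while the target is being restarted. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static bool try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok)
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
    close(fd);
    return ok;
}

int main(void)
{
    /* Retry for up to ~30 seconds; ECONNREFUSED is expected until the
     * restarted target is listening on 10.0.0.2:4420 again. */
    for (int attempt = 0; attempt < 30; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            puts("target is back, connection established");
            return 0;
        }
        sleep(1);
    }
    fprintf(stderr, "target did not come back in time\n");
    return 1;
}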
00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:34.167 [2024-05-15 17:13:12.925596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.167 [2024-05-15 17:13:12.926000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.926027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.926234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.926655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.926682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.927038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.927401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.927427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.927695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.927956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.927982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.928354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.928788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.928816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.929174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.929571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.929600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.929954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.930319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.930348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 
00:28:34.167 [2024-05-15 17:13:12.930663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.931072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.931099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.931449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.931827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.931856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.932261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.932650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.932680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.933067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.933418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.933447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 [2024-05-15 17:13:12.933830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1646506 00:28:34.167 [2024-05-15 17:13:12.934066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 [2024-05-15 17:13:12.934096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.167 qpair failed and we were unable to recover it. 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1646506 00:28:34.167 [2024-05-15 17:13:12.934459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1646506 ']' 00:28:34.167 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:34.168 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.168 [2024-05-15 17:13:12.934863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.934896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:34.168 [2024-05-15 17:13:12.935306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.168 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:34.168 [2024-05-15 17:13:12.935539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.935606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 17:13:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.168 [2024-05-15 17:13:12.936002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.936398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.936429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.936698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.936949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.936982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.937284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.937534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.937581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.937855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.938298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.938327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.938718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.939072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.939102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 
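After the new nvmf_tgt instance (nvmfpid=1646506) is launched inside the cvl_0_0_ns_spdk network namespace, waitforlisten blocks until the process is up and listening on the UNIX domain socket /var/tmp/spdk.sock, as the echoed message above says. Conceptually that amounts to polling the RPC socket until a connect() succeeds; the following is a hypothetical standalone sketch of that idea, not the actual shell helper used by the test framework:

/* Conceptual sketch of "wait until the app listens on /var/tmp/spdk.sock".
 * The real helper is a shell function in the test scripts; the path comes
 * from the log and the timeout here is illustrative. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* Poll once a second until the RPC listener appears (or give up). */
    for (int i = 0; i < 60; i++) {
        if (rpc_socket_ready("/var/tmp/spdk.sock")) {
            puts("RPC socket is ready");
            return 0;
        }
        sleep(1);
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}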
00:28:34.168 [2024-05-15 17:13:12.939330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.939679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.939710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.939996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.940298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.940328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.940660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.941087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.941119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.941402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.941851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.941881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.942262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.942511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.942542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.942914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.943259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.943290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.943660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.944062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.944093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-05-15 17:13:12.944368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.944752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.944782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.945158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.945516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.945556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.945999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.946378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.946407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.946803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.947055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.947088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.947492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.947815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.947847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.948231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.948610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.948639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.948906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.949268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.949297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 
00:28:34.168 [2024-05-15 17:13:12.949701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.950155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.950184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.950570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.950844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.950871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.168 qpair failed and we were unable to recover it. 00:28:34.168 [2024-05-15 17:13:12.951240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.168 [2024-05-15 17:13:12.951664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.951693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.952109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.952350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.952377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.952737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.953145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.953175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.953567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.953972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.954001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.954271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.954675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.954705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 
00:28:34.169 [2024-05-15 17:13:12.955080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.955484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.955512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.955890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.956304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.956333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.956736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.957107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.957135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.957516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.957924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.957955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.958310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.958735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.958765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.958993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.959383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.959411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.959817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.960076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.960105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 
00:28:34.169 [2024-05-15 17:13:12.960388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.960757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.960788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.961158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.961425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.961455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.961874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.962144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.962176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.962569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.962838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.962869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.963253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.963700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.963730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.964156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.964564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.964596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.964997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.965378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.965407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 
00:28:34.169 [2024-05-15 17:13:12.965808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.966073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.966101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.966466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.966609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.966636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.967045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.967371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.967399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.967658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.968039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.968068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.169 [2024-05-15 17:13:12.968450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.968729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.169 [2024-05-15 17:13:12.968758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.169 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.969228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.969500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.969531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.969917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.970301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.970330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 
00:28:34.170 [2024-05-15 17:13:12.970721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.970976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.971003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.971295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.971610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.971640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.972016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.972403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.972431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.972706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.972968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.972996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.973258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.973518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.973567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.973973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.974106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.974136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.974532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.974794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.974821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 
00:28:34.170 [2024-05-15 17:13:12.974952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.975211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.975240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.975643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.975892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.975923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.976305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.976649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.976679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.977104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.977491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.977520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.977928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.978311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.978340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.978618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.978735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.170 [2024-05-15 17:13:12.978761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.170 qpair failed and we were unable to recover it. 00:28:34.170 [2024-05-15 17:13:12.979129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.979500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.979534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
00:28:34.441 [2024-05-15 17:13:12.979951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.980336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.980368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.980747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.981144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.981175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.981586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.981877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.981904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.982182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.982584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.982614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.982922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.983387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.983416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.983809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.984212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.984242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.984643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.984782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.984812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
00:28:34.441 [2024-05-15 17:13:12.985087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.985343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.985373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.985620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.985889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.985916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.986298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.986686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.986715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.987102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.987339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.987368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.987735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.988145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.988175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.988235] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:34.441 [2024-05-15 17:13:12.988293] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.441 [2024-05-15 17:13:12.988538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.988801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.988828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 
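Interleaved with the connection retries, the target process begins initializing: SPDK v24.05-pre on DPDK 23.11.0, launched with EAL parameters that include -c 0xF0. That coremask is binary 11110000, selecting cores 4 through 7, which matches the later spdk_app_start notice of 4 total cores. The short sketch below decodes such a mask with generic bit arithmetic; it is an illustration, not DPDK's own EAL option parser.

/* coremask.c - decode a DPDK-style hex coremask such as 0xF0.
 * Generic illustration; this is not DPDK's EAL parsing code. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long long mask = strtoull("0xF0", NULL, 16);  /* value taken from the log */
    int count = 0;

    printf("cores selected:");
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf(" %d", core);
            count++;
        }
    }
    /* 0xF0 = 0b11110000 -> cores 4 5 6 7, total 4 */
    printf("\ntotal cores: %d\n", count);
    return 0;
}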
00:28:34.441 [2024-05-15 17:13:12.989083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.989481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.989511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.989907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.990316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.990345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.990749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.991023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.991056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.991447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.991832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.991863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.992262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.992644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.992675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.441 [2024-05-15 17:13:12.993049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.993278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.441 [2024-05-15 17:13:12.993310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.441 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.993587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.994016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.994046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-05-15 17:13:12.994435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.994870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.994902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.995145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.995493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.995523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.995925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.996249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.996280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.996768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.997153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.997183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.997581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.997985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.998015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.998425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.998692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.998730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:12.999144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.999528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:12.999570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-05-15 17:13:12.999956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.000338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.000368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.000760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.001156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.001188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.001594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.001987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.002015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.002413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.002801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.002834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.003216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.003601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.003631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.004045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.004369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.004402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.004817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.005204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.005234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-05-15 17:13:13.005638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.006061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.006091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.006462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.006835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.006871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.007275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.007661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.007693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.008088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.008369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.008399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.008805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.009200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.009230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.009617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.009962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.009992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.010412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.010809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.010840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.442 [2024-05-15 17:13:13.011231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.011586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.011616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.012000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.012382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.012413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.012679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.012943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.012970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.013376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.013750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.013780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.014054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.014421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.014457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.014878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.015252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.015280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 00:28:34.442 [2024-05-15 17:13:13.015662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.015922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.442 [2024-05-15 17:13:13.015950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.442 qpair failed and we were unable to recover it. 
00:28:34.443 [2024-05-15 17:13:13.016355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.016739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.016769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.017199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.017584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.017614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.018052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.018438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.018467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.018871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.019255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.019285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.019631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.019892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.019920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.020314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.020702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.020732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.021157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.021541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.021584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 
00:28:34.443 [2024-05-15 17:13:13.021970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.022296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.022330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.022713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.023102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.023130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.023377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.023758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.023788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.024188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.024589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.024618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.443 [2024-05-15 17:13:13.024891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.025097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.025126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.025498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.025877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.025908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.026296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.026681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.026712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 
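This stretch also carries the EAL notice that no free 2048 kB hugepages were reported on node 1, i.e. the reserved 2 MB hugepages are not available on that NUMA node. The per-node counters live in sysfs under /sys/devices/system/node/node<N>/hugepages/; the sketch below reads them. It is a generic check assuming standard Linux sysfs paths, not part of the test scripts.

/* hugepages.c - print free 2 MB hugepages per NUMA node via sysfs.
 * Standard Linux sysfs locations are assumed; nodes 0..7 are probed and
 * missing nodes are simply skipped. */
#include <stdio.h>

int main(void)
{
    char path[128];

    for (int node = 0; node < 8; node++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
                 node);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                /* node does not exist on this machine */

        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node %d: %ld free 2048 kB hugepages\n", node, free_pages);
        fclose(f);
    }
    return 0;
}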
00:28:34.443 [2024-05-15 17:13:13.027069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.027434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.027463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.027841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.028219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.028250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.028652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.028918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.028948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.029353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.029739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.029775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.030131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.030451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.030482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.030859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.031224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.031254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.031639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.032028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.032057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 
00:28:34.443 [2024-05-15 17:13:13.032409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.032747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.032777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.033023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.033390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.033418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.033813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.034188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.034216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.034467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.034828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.034858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.443 qpair failed and we were unable to recover it. 00:28:34.443 [2024-05-15 17:13:13.035259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.443 [2024-05-15 17:13:13.035639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.035669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.036055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.036319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.036347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.036603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.036989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.037030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
00:28:34.444 [2024-05-15 17:13:13.037491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.037855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.037884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.038283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.038524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.038601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.038888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.039270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.039299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.039694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.040084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.040112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.040485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.040907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.040937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.041339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.041580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.041608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.041891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.042269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.042298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
00:28:34.444 [2024-05-15 17:13:13.042702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.043100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.043127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.043515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.043943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.043975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.044388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.044780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.044809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.045062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.045437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.045465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.045864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.046281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.046310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.046590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.046960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.046988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.047375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.047508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.047533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
00:28:34.444 [2024-05-15 17:13:13.047920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.048265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.048295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.048716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.049102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.049132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.049514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.049773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.049804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.050246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.050648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.050679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.051053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.051438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.051468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.051865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.052244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.052275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.052660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.053050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.053078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 
00:28:34.444 [2024-05-15 17:13:13.053477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.053901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.053932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.054299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.054653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.054684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.055072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.055452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.444 [2024-05-15 17:13:13.055481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.444 qpair failed and we were unable to recover it. 00:28:34.444 [2024-05-15 17:13:13.055848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.056220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.056249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.056637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.056892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.056923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.057296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.057576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.057607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.057989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.058361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.058390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-05-15 17:13:13.058797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.059206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.059238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.059507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.059955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.059986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.060240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.060661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.060692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.061068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.061470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.061500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.061906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.062287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.062315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.062716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.063096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.063124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.063513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.063780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.063810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-05-15 17:13:13.064159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.064558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.064588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.064952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.065305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.065334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.065694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.065936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.065967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.066320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.066639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.066668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.066934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.067307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.067337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.067721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.068113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.068143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.068539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.068933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.068963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-05-15 17:13:13.069323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.069716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.069746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.070109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.070484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.070515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-05-15 17:13:13.070915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.071318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.445 [2024-05-15 17:13:13.071347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.071627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.072014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.072042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.072444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.072828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.072860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.073092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.073479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.073509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.073901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.074276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.074305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-05-15 17:13:13.074691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.074948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.074977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
00:28:34.446 [2024-05-15 17:13:13.075362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.075745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.075776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
00:28:34.446 [2024-05-15 17:13:13.076133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.076517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.076565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
00:28:34.446 [2024-05-15 17:13:13.076787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.077174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.077202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
00:28:34.446 [2024-05-15 17:13:13.077595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.078002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.078030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
00:28:34.446 [2024-05-15 17:13:13.078444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.078823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.078854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
00:28:34.446 [2024-05-15 17:13:13.079272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.079574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:34.446 [2024-05-15 17:13:13.079687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.446 [2024-05-15 17:13:13.079716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.446 qpair failed and we were unable to recover it.
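The "app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4" record interleaved above comes from the target side bringing up the SPDK event framework while the host is still retrying its connections. For orientation, a bare-bones SPDK application entry point looks roughly like the sketch below; this is an illustration only, not the test's actual nvmf target code, and the spdk_app_opts_init() signature shown (with the size argument) is the form used by recent SPDK releases:

```c
/*
 * Minimal SPDK application skeleton.  spdk_app_start() is where the
 * "Total cores available: N" notice in the log is emitted.
 */
#include "spdk/event.h"

static void start_fn(void *arg)
{
    /* Real applications would set up subsystems/pollers here. */
    spdk_app_stop(0);
}

int main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    /* Two-argument form used by recent SPDK releases; older ones take only &opts. */
    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "minimal_app";

    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}
```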
00:28:34.446 [2024-05-15 17:13:13.080115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.080471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.080499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.080830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.081206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.081235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.081627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.082036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.082066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-05-15 17:13:13.082452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.446 [2024-05-15 17:13:13.082831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.082862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.083218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.083451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.083481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.083900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.084274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.084306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.084711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.084955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.084985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-05-15 17:13:13.085360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.085743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.085773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.086151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.086382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.086411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.086760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.087120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.087150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.087565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.087860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.087893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.088284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.088580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.088610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.089023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.089410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.089438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.089895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.090146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.090173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-05-15 17:13:13.090569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.090962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.090993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.091351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.091749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.091780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.092177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.092578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.092608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.092999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.093380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.093410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.093803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.094186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.094215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.094627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.095021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.095049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.095440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.095850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.095879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-05-15 17:13:13.096270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.096522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.096561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.096960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.097106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.097134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.097544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.097957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.097986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-05-15 17:13:13.098363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.447 [2024-05-15 17:13:13.098749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.098781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.099139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.099528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.099573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.099857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.100223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.100251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.100629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.101022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.101055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 
00:28:34.448 [2024-05-15 17:13:13.101432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.101830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.101860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.102012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.102399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.102429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.102732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.103134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.103163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.103541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.103969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.103998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.104377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.104761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.104792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.105196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.105600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.105631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.106017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.106388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.106418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 
00:28:34.448 [2024-05-15 17:13:13.106613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.107004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.107033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.107418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.107776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.107806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.108197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.108572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.108601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.109047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.109410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.109439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.109823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.110157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.110187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.110585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.110974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.111002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.111404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.111792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.111825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 
00:28:34.448 [2024-05-15 17:13:13.112232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.112574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.112605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.112989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.113234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.113266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.448 qpair failed and we were unable to recover it. 00:28:34.448 [2024-05-15 17:13:13.113647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.448 [2024-05-15 17:13:13.113980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.114009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.114355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.114712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.114743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.115136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.115507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.115537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.115944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.116257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.116287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.116659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.117044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.117072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 
00:28:34.449 [2024-05-15 17:13:13.117489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.117841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.117871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.118258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.118632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.118663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.119044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.119419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.119448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.119839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.120211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.120243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.120590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.120862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.120896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.121283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.121647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.121677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.121945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.122319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.122348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 
00:28:34.449 [2024-05-15 17:13:13.122728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.123080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.123108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.123490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.123857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.123887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.124229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.124642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.124672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.125075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.125453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.125484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.125851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.126216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.126245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.126587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.126964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.126996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.127385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.127808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.127840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 
00:28:34.449 [2024-05-15 17:13:13.128205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.128580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.128612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.128986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.129369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.129400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.449 qpair failed and we were unable to recover it. 00:28:34.449 [2024-05-15 17:13:13.129868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.449 [2024-05-15 17:13:13.130249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.130277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.130680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.131075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.131105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.131517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.131906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.131938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.132315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.132667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.132697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.133080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.133457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.133487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 
00:28:34.450 [2024-05-15 17:13:13.133847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.134259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.134289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.134675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.135052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.135082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.135454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.135827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.135858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.136119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.136556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.136587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.137020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.137442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.137472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.137851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.138076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.138107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.138490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.138839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.138869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 
00:28:34.450 [2024-05-15 17:13:13.139252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.139628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.139658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.140091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.140472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.140501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.140921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.141298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.141327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.141707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.142106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.142135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.142476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.142851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.142882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.143264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.143638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.143669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.144048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.144411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.144441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 
00:28:34.450 [2024-05-15 17:13:13.144826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.145197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.145227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.450 [2024-05-15 17:13:13.145600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.146007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.450 [2024-05-15 17:13:13.146036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.450 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.146412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.146756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.146786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.147153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.147387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.147415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.147824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.148191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.148219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.148580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.148955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.148985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.149380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.149689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.149721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 
00:28:34.451 [2024-05-15 17:13:13.150125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.150495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.150526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.150941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.151318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.151348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.151719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.152109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.152138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.152537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.152943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.152973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.153345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.153716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.153747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.154130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.154355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.154384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.154753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.155141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.155169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 
00:28:34.451 [2024-05-15 17:13:13.155557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.155976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.156005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.156419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.156793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.156824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.157227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.157480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.157511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.157908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.158324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.158355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.158620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.158873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.158904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.159274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.159620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.159651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.159948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.160229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.160265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 
00:28:34.451 [2024-05-15 17:13:13.160653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.161041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.161071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.451 qpair failed and we were unable to recover it. 00:28:34.451 [2024-05-15 17:13:13.161418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.161791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.451 [2024-05-15 17:13:13.161821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.162194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.162565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.162594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.162890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.163268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.163298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.163688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.164068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.164096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.164488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.164874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.164903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.165243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.165634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.165665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 
00:28:34.452 [2024-05-15 17:13:13.166053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.166396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.166425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.166789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.167166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.167195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.167594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.167999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.168033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.168399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.168744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.168773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.168999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.169363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.169392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.169777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.170150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.170180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 00:28:34.452 [2024-05-15 17:13:13.170579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.170987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.452 [2024-05-15 17:13:13.171017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.452 qpair failed and we were unable to recover it. 
00:28:34.452 [2024-05-15 17:13:13.171389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.171747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.171778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.452 qpair failed and we were unable to recover it.
00:28:34.452 [2024-05-15 17:13:13.172147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.172360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.172388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.452 qpair failed and we were unable to recover it.
00:28:34.452 [2024-05-15 17:13:13.172749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.173096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.173125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.452 qpair failed and we were unable to recover it.
00:28:34.452 [2024-05-15 17:13:13.173505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.452 [2024-05-15 17:13:13.173859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.173888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.174265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.174632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.174662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.175047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.175419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.175454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.175705] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:34.453 [2024-05-15 17:13:13.175726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.175757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:34.453 [2024-05-15 17:13:13.175767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:34.453 [2024-05-15 17:13:13.175773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:34.453 [2024-05-15 17:13:13.175780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:34.453 [2024-05-15 17:13:13.175956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:28:34.453 [2024-05-15 17:13:13.176152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.176181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.176127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:28:34.453 [2024-05-15 17:13:13.176289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:28:34.453 [2024-05-15 17:13:13.176289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:28:34.453 [2024-05-15 17:13:13.176571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.176985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.177015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.177389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.177753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.177784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.178160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.178499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.178528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.178930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.179312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.179340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.179727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.180100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.180128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
00:28:34.453 [2024-05-15 17:13:13.180505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.180810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.453 [2024-05-15 17:13:13.180841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:34.453 qpair failed and we were unable to recover it.
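The app_setup_trace NOTICE lines above describe how a tracepoint snapshot could be taken while the nvmf target is running. A sketch based only on the command and path named in those messages (the output redirection and destination directory are illustrative, and this job does not run these steps):

  # capture a snapshot of nvmf tracepoints from shared-memory instance 0
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or keep the raw trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/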
00:28:34.453 [2024-05-15 17:13:13.181210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.181586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.181618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.181994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.182407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.182434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.182701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.183007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.183037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.183514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.183916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.183947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.184204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.184460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.184491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.184908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.185282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.185313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.185693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.185978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.186006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 
00:28:34.453 [2024-05-15 17:13:13.186379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.186757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.186787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.187165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.187410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.187440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.453 qpair failed and we were unable to recover it. 00:28:34.453 [2024-05-15 17:13:13.187802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.188031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.453 [2024-05-15 17:13:13.188058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.188326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.188697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.188727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.189038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.189288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.189319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.189697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.190092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.190121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.190512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.190897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.190927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 
00:28:34.454 [2024-05-15 17:13:13.191203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.191447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.191475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.191774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.192162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.192191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.192562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.192948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.192978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.193271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.193634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.193664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.193914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.194343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.194373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.194626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.195018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.195047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.195436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.195821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.195852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 
00:28:34.454 [2024-05-15 17:13:13.196151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.196526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.196566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.197007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.197387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.197415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.197818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.198183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.198212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.198477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.198849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.198877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.199235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.199490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.199519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.199970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.200249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.200278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.200541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.200949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.200979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 
00:28:34.454 [2024-05-15 17:13:13.201359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.201710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.201740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.202144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.202402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.202433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.202580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.202869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.202898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.454 qpair failed and we were unable to recover it. 00:28:34.454 [2024-05-15 17:13:13.203282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.454 [2024-05-15 17:13:13.203540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.203589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.203996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.204349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.204378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.204704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.204955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.204982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.205239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.205615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.205646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 
00:28:34.455 [2024-05-15 17:13:13.205896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.206273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.206302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.206589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.206855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.206887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.207283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.207675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.207706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.208180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.208409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.208438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.208804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.209122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.209152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.209620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.210015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.210051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.210311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.210567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.210597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 
00:28:34.455 [2024-05-15 17:13:13.210996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.211401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.211432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.211825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.212059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.212085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.212439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.212678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.212708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.213067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.213449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.213477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.213873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.214088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.214116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.214491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.214720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.214748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.215148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.215362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.215389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 
00:28:34.455 [2024-05-15 17:13:13.215752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.216148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.216177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.216566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.216819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.216852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.217217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.217611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.455 [2024-05-15 17:13:13.217641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.455 qpair failed and we were unable to recover it. 00:28:34.455 [2024-05-15 17:13:13.218057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.218446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.218475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.218848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.219197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.219226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.219628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.219900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.219931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.220311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.220693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.220725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 
00:28:34.456 [2024-05-15 17:13:13.221009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.221281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.221310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.221717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.221977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.222006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.222399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.222756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.222787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.223183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.223590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.223620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.224042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.224300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.224334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.224721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.225125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.225155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.225576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.225953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.225983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 
00:28:34.456 [2024-05-15 17:13:13.226253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.226650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.226681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.227063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.227444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.227473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.227715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.227989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.228018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.228410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.228794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.228825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.229206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.229420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.229448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.229803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.230052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.230079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.230349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.230728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.230758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 
00:28:34.456 [2024-05-15 17:13:13.230977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.231356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.231387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.231764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.232146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.232175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.232564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.232811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.232840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.233232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.233453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.233481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.233839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.234220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.234248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.234474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.234868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.234897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.456 [2024-05-15 17:13:13.235278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.235644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.235674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 
00:28:34.456 [2024-05-15 17:13:13.236096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.236351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.456 [2024-05-15 17:13:13.236378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.456 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.236756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.236976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.237004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.237458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.237682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.237712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.238093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.238347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.238376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.238740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.239148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.239176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.239538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.239820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.239850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.240257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.240473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.240501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 
00:28:34.457 [2024-05-15 17:13:13.240830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.241255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.241283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.241525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.241961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.241991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.242380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.242759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.242791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.243166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.243540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.243584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.243841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.244230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.244258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.244644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.245027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.245056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.245359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.245732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.245761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 
00:28:34.457 [2024-05-15 17:13:13.246137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.246518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.246558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.246921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.247306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.247336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.247568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.247952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.247981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.248368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.248742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.248773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.249189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.249407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.249435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.249629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.249869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.249897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.250255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.250629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.250659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 
00:28:34.457 [2024-05-15 17:13:13.251053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.251389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.251421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.251815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.252029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.252058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.252469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.252859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.252888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.253121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.253362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.253393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.253791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.254138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.254166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.254564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.254951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.254980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.255354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.255778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.255806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 
00:28:34.457 [2024-05-15 17:13:13.256210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.256534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.256581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.256996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.257209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.257237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.457 qpair failed and we were unable to recover it. 00:28:34.457 [2024-05-15 17:13:13.257592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.457 [2024-05-15 17:13:13.257834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.257863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.258244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.258627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.258657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.259061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.259412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.259440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.259807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.260175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.260204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.260581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.260959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.260989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 
00:28:34.458 [2024-05-15 17:13:13.261363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.261741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.261770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.262155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.262388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.262418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.262758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.263138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.263167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.263589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.263976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.264004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.264119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.264460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.264489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.264871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.265241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.265270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.458 [2024-05-15 17:13:13.265486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.265851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.265880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 
00:28:34.458 [2024-05-15 17:13:13.266273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.266626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.458 [2024-05-15 17:13:13.266654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.458 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.267037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.267418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.267448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.267905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.268108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.268135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.268483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.268688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.268719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.269083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.269454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.269485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.269848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.270220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.270249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.270654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.271045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.271073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 
00:28:34.729 [2024-05-15 17:13:13.271431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.271791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.271823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.272188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.272581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.272610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.273008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.273383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.273412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.273647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.274039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.274068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.274426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.274669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.274698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.275068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.275302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.275329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.275701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.275973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.276002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 
00:28:34.729 [2024-05-15 17:13:13.276401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.276746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.276775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.729 qpair failed and we were unable to recover it. 00:28:34.729 [2024-05-15 17:13:13.277171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.729 [2024-05-15 17:13:13.277444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.277474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.277885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.278149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.278178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.278563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.278794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.278823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.279200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.279541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.279583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.279974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.280350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.280379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.280745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.280839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.280864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 
00:28:34.730 [2024-05-15 17:13:13.281221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.281476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.281503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.281878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.282123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.282152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.282515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.282929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.282960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.283204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.283432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.283460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.283845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.284227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.284256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.284645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.284747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.284775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.285119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.285492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.285520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 
00:28:34.730 [2024-05-15 17:13:13.285794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.286170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.286199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.286649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.286995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.287024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.287392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.287607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.287634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.288053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.288440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.288470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.288848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.289222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.289252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.289632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.289899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.289928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.290160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.290497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.290525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 
00:28:34.730 [2024-05-15 17:13:13.290818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.291197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.291226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.291609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.291988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.292020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.292394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.292814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.292845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.293075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.293472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.293501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.293900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.294111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.294138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.294512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.294876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.730 [2024-05-15 17:13:13.294906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.730 qpair failed and we were unable to recover it. 00:28:34.730 [2024-05-15 17:13:13.295289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.295652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.295682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-05-15 17:13:13.296075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.296285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.296313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.296717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.297117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.297146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.297370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.297838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.297869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.298139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.298396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.298428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.298778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.299157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.299185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.299581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.299675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.299700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.300109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.300486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.300517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-05-15 17:13:13.300998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.301377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.301407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.301636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.301976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.302005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.302402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.302629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.302659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.303081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.303454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.303483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.303854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.304206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.304235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.304646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.304909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.304938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.305336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.305575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.305605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-05-15 17:13:13.305986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.306374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.306402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.306790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.306921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.306950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.307397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.307752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.307781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.308068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.308453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.308482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.308847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.309267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.309296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.309570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.309960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.309992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.310387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.310655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.310684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 
00:28:34.731 [2024-05-15 17:13:13.311070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.311464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.311493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.311859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.312243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.312270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.312659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.313125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.731 [2024-05-15 17:13:13.313154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.731 qpair failed and we were unable to recover it. 00:28:34.731 [2024-05-15 17:13:13.313574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.313962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.313992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.314407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.314752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.314783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.315159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.315536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.315576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.315976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.316237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.316267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-05-15 17:13:13.316629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.317025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.317055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.317413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.317820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.317850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.318233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.318442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.318490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.318763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.318968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.318998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.319230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.319571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.319603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.320038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.320440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.320468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.320853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.321198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.321227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-05-15 17:13:13.321627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.322021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.322050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.322459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.322847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.322877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.323272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.323489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.323517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.323775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.324179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.324209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.324442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.324867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.324897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.325300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.325567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.325603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.326016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.326391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.326419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.732 [2024-05-15 17:13:13.326803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.327146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.327175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.327568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.327957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.327986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.328378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.328755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.328785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.329015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.329215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.329242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.329487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.329777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.329806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.330189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.330578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.330609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 00:28:34.732 [2024-05-15 17:13:13.330989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.331402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.732 [2024-05-15 17:13:13.331436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.732 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-05-15 17:13:13.331803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.332183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.332211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.332590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.332824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.332860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.333233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.333626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.333656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.334069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.334415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.334445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.334841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.335203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.335233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.335637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.336015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.336043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.336422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.336753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.336782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.733 [2024-05-15 17:13:13.337037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.337450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.337479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.337903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.338281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.338309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.338572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.338991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.339021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.339250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.339653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.339684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.340117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.340526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.340594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.340998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.341380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.341411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 00:28:34.733 [2024-05-15 17:13:13.341700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.342090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.733 [2024-05-15 17:13:13.342120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.733 qpair failed and we were unable to recover it. 
00:28:34.739 [2024-05-15 17:13:13.444421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.444778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.444808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.445110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.445504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.445531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.445961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.446358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.446388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.446759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.447074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.447104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.447430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.447811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.447842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.448241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.448620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.448650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.449048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.449440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.449470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 
00:28:34.739 [2024-05-15 17:13:13.449723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.450110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.450139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.450398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.450852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.450888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.451149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.451536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.451578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.739 qpair failed and we were unable to recover it. 00:28:34.739 [2024-05-15 17:13:13.451961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.452328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.739 [2024-05-15 17:13:13.452356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.452590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.452958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.452986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.453219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.453629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.453659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.453940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.454205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.454233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 
00:28:34.740 [2024-05-15 17:13:13.454577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.454928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.454957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.455336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.455716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.455746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.456151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.456536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.456580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.457023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.457230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.457257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.457615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.458005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.458040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.458409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.458793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.458823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.459178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.459578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.459609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 
00:28:34.740 [2024-05-15 17:13:13.459976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.460316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.460345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.460598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.460854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.460882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.461165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.461524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.461562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.461994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.462373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.462402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.462801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.463183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.463213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.463608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.464040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.464069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.740 qpair failed and we were unable to recover it. 00:28:34.740 [2024-05-15 17:13:13.464333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.464578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.740 [2024-05-15 17:13:13.464607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-05-15 17:13:13.465078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.465292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.465322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.465598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.465987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.466016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.466234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.466492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.466521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.466913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.467298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.467328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.467594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.467897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.467926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.468051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.468446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.468474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.468772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.469031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.469061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-05-15 17:13:13.469285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.469696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.469727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.470038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.470416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.470446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.470707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.471095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.471125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.471523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.471782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.471812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.472214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.472594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.472623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.472961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.473340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.473370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.473632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.474061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.474091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-05-15 17:13:13.474501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.474890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.474920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.475332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.475782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.475811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.476198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.476411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.476439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.476855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.477063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.477091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.477357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.477719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.477749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.478153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.478557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.478588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.478849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.479209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.479237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 
00:28:34.741 [2024-05-15 17:13:13.479636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.479977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.741 [2024-05-15 17:13:13.480006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.741 qpair failed and we were unable to recover it. 00:28:34.741 [2024-05-15 17:13:13.480232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.480575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.480606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.480857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.481230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.481260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.481489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.481849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.481880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.482292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.482674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.482704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.482968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.483370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.483400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.483700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.484089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.484118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 
00:28:34.742 [2024-05-15 17:13:13.484502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.484939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.484969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.485359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.485737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.485770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.486003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.486376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.486405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.486686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.487059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.487087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.487462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.487869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.487899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.488290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.488645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.488677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.489067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.489439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.489468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 
00:28:34.742 [2024-05-15 17:13:13.489692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.490073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.490102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.490512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.490952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.490981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.491379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.491754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.491784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.492178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.492570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.492602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.492996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.493370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.493399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.493628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.493877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.493907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.494269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.494638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.494669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 
00:28:34.742 [2024-05-15 17:13:13.495078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.495191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.495218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.495583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.496024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.496053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.496434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.496831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.496864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.497250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.497626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.497656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.498030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.498439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.498467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.498860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.499236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.499264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 00:28:34.742 [2024-05-15 17:13:13.499720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.500086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.500115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.742 qpair failed and we were unable to recover it. 
00:28:34.742 [2024-05-15 17:13:13.500514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.742 [2024-05-15 17:13:13.500976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.501007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.501395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.501836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.501868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.502097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.502492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.502523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.502786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.503237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.503266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.503665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.504073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.504102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.504503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.504886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.504918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.505304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.505689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.505722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-05-15 17:13:13.505986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.506394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.506423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.506794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.507172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.507201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.507596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.507885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.507913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.508298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.508578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.508609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.508980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.509367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.509395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.509795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.510175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.510207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.510373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.510743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.510773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-05-15 17:13:13.510990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.511381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.511410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.511646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.512014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.512043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.512455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.512759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.512792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.513020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.513398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.513428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.513831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.514217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.514247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.514629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.515016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.515044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.515395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.515765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.515795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 
00:28:34.743 [2024-05-15 17:13:13.516190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.516487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.516518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.516978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.517222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.517253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.517578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.518019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.518051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.743 [2024-05-15 17:13:13.518416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.518829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.743 [2024-05-15 17:13:13.518859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.743 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.519127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.519524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.519566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.519943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.520299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.520328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.520736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.521140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.521169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-05-15 17:13:13.521445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.521827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.521859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.522218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.522443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.522472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.522742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.523122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.523152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.523570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.523981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.524011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.524431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.524688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.524719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.525080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.525493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.525523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.525924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.526307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.526336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-05-15 17:13:13.526704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.526819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.526846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.527198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.527610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.527641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.527742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.528006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.528034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.528419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.528832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.528863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.529129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.529534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.529576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.529998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.530220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.530247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.530608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.530994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.531023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-05-15 17:13:13.531392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.531600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.531628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.532007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.532223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.532250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.532611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.533004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.533032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.533252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.533627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.533659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.533885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.534276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.534305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.534687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.534926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.534953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.535332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.535723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.535752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 
00:28:34.744 [2024-05-15 17:13:13.536144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.536520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.536560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.536948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.537321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.744 [2024-05-15 17:13:13.537351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.744 qpair failed and we were unable to recover it. 00:28:34.744 [2024-05-15 17:13:13.537735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.537999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.538029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.538422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.538808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.538839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.539232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.539464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.539495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.539932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.540309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.540337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.540567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.540774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.540804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 
00:28:34.745 [2024-05-15 17:13:13.541201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.541579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.541609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.541956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.542396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.542425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.542817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.543192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.543220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.543613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.543845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.543872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.544125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.544580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.544612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.545037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.545449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.545477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.545818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.546206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.546236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 
00:28:34.745 [2024-05-15 17:13:13.546596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.546870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.546897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.547253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.547647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.547677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.548055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.548414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.548445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.548846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.549226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.549255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.549684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.550085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.550115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.550503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.550930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.550961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.745 qpair failed and we were unable to recover it. 00:28:34.745 [2024-05-15 17:13:13.551349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.745 [2024-05-15 17:13:13.551728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.551759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 
00:28:34.746 [2024-05-15 17:13:13.552110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.552484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.552514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-05-15 17:13:13.552791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.553182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.553211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-05-15 17:13:13.553601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.553962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.553994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-05-15 17:13:13.554262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.554472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.746 [2024-05-15 17:13:13.554500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:34.746 qpair failed and we were unable to recover it. 00:28:34.746 [2024-05-15 17:13:13.554907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.555292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.555322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.555580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.555958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.555987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.556185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.556579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.556610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 
00:28:35.017 [2024-05-15 17:13:13.557051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.557443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.557472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.557731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.557967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.557997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.558389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.558746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.558777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.559178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.559429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.559456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.559730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.560133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.560161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.560385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.560798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.560836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.561216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.561597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.561627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 
00:28:35.017 [2024-05-15 17:13:13.562020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.562398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.562427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.562788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.563165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.563193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.017 qpair failed and we were unable to recover it. 00:28:35.017 [2024-05-15 17:13:13.563399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.017 [2024-05-15 17:13:13.563745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.563774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.564002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.564399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.564427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.564812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.565187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.565215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.565444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.565787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.565816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.566044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.566409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.566439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 
00:28:35.018 [2024-05-15 17:13:13.566758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.567135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.567162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.567386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.567825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.567861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.568222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.568430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.568458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.568845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.569222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.569251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.569627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.570014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.570043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.570430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.570775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.570806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.571255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.571638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.571669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 
00:28:35.018 [2024-05-15 17:13:13.572130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.572345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.572371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.572745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.572840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.572864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.573215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.573577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.573607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.574044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.574266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.574294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.574680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.574928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.574955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.575214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.575599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.575630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.575884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.576284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.576313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 
00:28:35.018 [2024-05-15 17:13:13.576712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.576935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.576962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.577330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.577724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.577754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.578139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.578489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.578518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.578917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.579301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.579330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.579570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.579960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.579989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.580363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.580730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.580760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.581171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.581540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.581580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 
00:28:35.018 [2024-05-15 17:13:13.582009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.582267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.582294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.582569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.582954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.582982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.583412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.583630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.583661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.018 qpair failed and we were unable to recover it. 00:28:35.018 [2024-05-15 17:13:13.583927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.584312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.018 [2024-05-15 17:13:13.584341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.584572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.584976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.585007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.585387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.585749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.585780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.586156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.586543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.586585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 
00:28:35.019 [2024-05-15 17:13:13.586986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.587367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.587398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.587778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.587990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.588020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.588461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.588852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.588884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.589143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.589523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.589567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.589959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.590335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.590364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.590733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.591113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.591141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.591573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.591920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.591950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 
00:28:35.019 [2024-05-15 17:13:13.592407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.592755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.592785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.593028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.593417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.593446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.593623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.594060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.594091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.594478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.594856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.594886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.595111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.595466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.595497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.595887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.596232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.596262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.596640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.597014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.597043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 
00:28:35.019 [2024-05-15 17:13:13.597435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.597811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.597841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.598087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.598345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.598374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.598739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.599111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.599140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.599538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.599754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.599783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.600023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.600235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.600264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.600637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.600869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.600896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.601269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.601644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.601673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 
00:28:35.019 [2024-05-15 17:13:13.602047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.602422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.602451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.602704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.603106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.603135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.603366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.603759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.603789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.604163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.604591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.604621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.605039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.605432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.019 [2024-05-15 17:13:13.605460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.019 qpair failed and we were unable to recover it. 00:28:35.019 [2024-05-15 17:13:13.605818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.606214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.606244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.606626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.606889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.606916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 
00:28:35.020 [2024-05-15 17:13:13.607120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.607477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.607504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.607901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.608109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.608137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.608517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.608907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.608938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.609314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.609693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.609726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.610079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.610463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.610492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.610765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.611188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.611218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.611488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.611864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.611894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 
00:28:35.020 [2024-05-15 17:13:13.612269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.612690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.612721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.613111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.613452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.613480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.613873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.614253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.614282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.614686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.615091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.615121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.615490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.615915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.615947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.616320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.616568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.616597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 00:28:35.020 [2024-05-15 17:13:13.616877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.617255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.020 [2024-05-15 17:13:13.617284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.020 qpair failed and we were unable to recover it. 
00:28:35.020 [2024-05-15 17:13:13.617507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... the same failure sequence repeats continuously with log timestamps from 17:13:13.617 through 17:13:13.716: pairs of posix.c:1037:posix_sock_create *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:28:35.025 [2024-05-15 17:13:13.716481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.716879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.716909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.717334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.717561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.717592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.717969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.718221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.718249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.718626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.719024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.719054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.719281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.719388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.719414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.719718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.720060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.720089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.720327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.720728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.720757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 
00:28:35.025 [2024-05-15 17:13:13.721031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.721376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.721404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.721794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.722165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.722196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.722576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.722839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.722869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.723135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.723527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.723569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.724009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.724362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.724392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.724784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.725235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.725265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.725650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.726085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.726114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 
00:28:35.025 [2024-05-15 17:13:13.726386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.726803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.726836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.025 qpair failed and we were unable to recover it. 00:28:35.025 [2024-05-15 17:13:13.727057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.025 [2024-05-15 17:13:13.727412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.727442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.727810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.728068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.728099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.728527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.728953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.728984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.729362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.729622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.729651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.729888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.730259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.730288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.730661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.730799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.730831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 
00:28:35.026 [2024-05-15 17:13:13.731231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.731486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.731514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.731896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.732235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.732264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.732649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.733026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.733056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.733303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.733558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.733589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.733825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.734220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.734250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.734482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.734721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.734752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.735185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.735596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.735627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 
00:28:35.026 [2024-05-15 17:13:13.736052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.736428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.736457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.736676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.737061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.737091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.737475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.737831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.737861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.738242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.738624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.738655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.739045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.739419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.739449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.739826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.740041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.740070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.740282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.740655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.740686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 
00:28:35.026 [2024-05-15 17:13:13.741082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.741416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.741446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.741672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.742065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.742094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.742469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.742855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.742885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.743258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.743615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.743647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.026 [2024-05-15 17:13:13.744034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.744299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.026 [2024-05-15 17:13:13.744332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.026 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.744701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.744938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.744966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.745353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.745615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.745647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-05-15 17:13:13.745911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.746280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.746309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.746705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.747109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.747140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.747542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.747956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.747986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.748372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.748747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.748778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.749136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.749528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.749570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.749994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.750368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.750398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.750805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.751199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.751229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-05-15 17:13:13.751603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.752001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.752032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.752402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.752659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.752692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.753085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.753469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.753497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.753882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.754259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.754287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.754563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.754948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.754977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.755245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.755622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.755652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.756013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.756409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.756438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-05-15 17:13:13.756792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.757176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.757206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.757431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.757818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.757847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.758223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.758604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.758634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.758874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.759247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.759276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.759668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.760043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.760072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.760448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.760799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.760828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.761065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.761164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.761190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 
00:28:35.027 [2024-05-15 17:13:13.761370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.761751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.761782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.762176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.762565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.762595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.762984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.763241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.763268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.763648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.764037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.764065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.764501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.764750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.764781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.765062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.765445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.765474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.027 qpair failed and we were unable to recover it. 00:28:35.027 [2024-05-15 17:13:13.765841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.027 [2024-05-15 17:13:13.766248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.766277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-05-15 17:13:13.766504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.766890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.766919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.767330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.767625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.767656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.768009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.768383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.768411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.768823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.769121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.769150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.769529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.769914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.769944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.770335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.770720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.770749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.771148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.771529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.771576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-05-15 17:13:13.771979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.772252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.772287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.772667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.773112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.773141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.773538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.773794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.773824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.774092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.774471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.774500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.774902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.775280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.775308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.775732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.775857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.775885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.776255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.776644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.776705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-05-15 17:13:13.776938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.777313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.777344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.777754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.777995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.778023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.778410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.778801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.778830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.779062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.779316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.779344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.779755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.780141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.780170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.780463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.780716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.780746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.781060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.781274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.781301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-05-15 17:13:13.781572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.781790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.781820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.782188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.782463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.782494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.782734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.783104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.783138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.783517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.783941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.783970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.784206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.784582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.784612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.784998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.785372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.785403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.785686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.786112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.786140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 
00:28:35.028 [2024-05-15 17:13:13.786520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.786905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.028 [2024-05-15 17:13:13.786934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.028 qpair failed and we were unable to recover it. 00:28:35.028 [2024-05-15 17:13:13.787337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.787705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.787735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.788125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.788507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.788536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.788917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.789183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.789213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.789636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.790019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.790048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.790185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.790458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.790490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.790771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.791196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.791224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 
00:28:35.029 [2024-05-15 17:13:13.791605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.791876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.791902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.029 qpair failed and we were unable to recover it.
00:28:35.029 [2024-05-15 17:13:13.792296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.792717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.792746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.029 qpair failed and we were unable to recover it.
00:28:35.029 [2024-05-15 17:13:13.792966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.793192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.793222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.029 qpair failed and we were unable to recover it.
00:28:35.029 [2024-05-15 17:13:13.793449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.793826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.793857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.029 qpair failed and we were unable to recover it.
00:28:35.029 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:35.029 [2024-05-15 17:13:13.794241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.794498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.794528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.029 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:28:35.029 qpair failed and we were unable to recover it.
00:28:35.029 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:35.029 [2024-05-15 17:13:13.795011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:35.029 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:35.029 [2024-05-15 17:13:13.795394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.029 [2024-05-15 17:13:13.795423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.029 qpair failed and we were unable to recover it.
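Editor's note for readers triaging the repeated failures above: errno = 111 is ECONNREFUSED on Linux, meaning the host-side connect() issued from posix_sock_create reaches 10.0.0.2 but nothing is accepting TCP connections on port 4420, so each nvme_tcp_qpair_connect_sock attempt on tqpair 0x7f720c000b90 fails and the qpair is reported as unrecoverable. The following is a minimal standalone C sketch, not SPDK's posix_sock_create; the address and port are simply copied from the log, and the program reproduces the same errno when no NVMe-oF target (or anything else) is listening on that port.

/* connect_probe.c - minimal sketch: connect() to a port with no listener
 * fails with ECONNREFUSED (errno 111 on Linux), matching the log above.
 * Build: cc -o connect_probe connect_probe.c
 * Run:   ./connect_probe 10.0.0.2 4420
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *addr = argc > 1 ? argv[1] : "10.0.0.2"; /* from the log */
    int port = argc > 2 ? atoi(argv[2]) : 4420;         /* from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", addr);
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener bound to addr:port this prints errno 111. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connected to %s:%d\n", addr, port);
    close(fd);
    return 0;
}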
00:28:35.029 [2024-05-15 17:13:13.795802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.796055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.796086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.796196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.796437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.796467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.796856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.797269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.797298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.797666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.798069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.798099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.798486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.798854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.798886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.799275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.799650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.799680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.800078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.800291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.800319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 
00:28:35.029 [2024-05-15 17:13:13.800680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.801064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.801095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.801498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.801916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.801946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.802300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.802577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.802608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.802848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.803232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.803265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.803657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.804051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.804081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.804282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.804652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.804683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.805126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.805469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.805498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 
00:28:35.029 [2024-05-15 17:13:13.805892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.806309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.806338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.806529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.806820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.806854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.029 qpair failed and we were unable to recover it. 00:28:35.029 [2024-05-15 17:13:13.807247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.029 [2024-05-15 17:13:13.807639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.807670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.807901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.808121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.808151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.808599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.808982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.809012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.809401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.809750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.809781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.810014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.810384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.810414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-05-15 17:13:13.810799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.811212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.811240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.811632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.812011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.812041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.812427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.812799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.812831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.813227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.813584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.813615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.814047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.814421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.814452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.814823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.815206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.815236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.815614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.816009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.816039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-05-15 17:13:13.816442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.816831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.816860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.817236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.817494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.817523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.817965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.818387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.818417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.818789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.819209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.819239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.819501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.819902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.819933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.820321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.820544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.820585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.820879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.821204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.821233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-05-15 17:13:13.821475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.821921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.821953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.822344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.822771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.822802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.823196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.823598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.823629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.824035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.824287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.824320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.824569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.824955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.824985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.825376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.825625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.825658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 00:28:35.030 [2024-05-15 17:13:13.826079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.826470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.826505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.030 qpair failed and we were unable to recover it. 
00:28:35.030 [2024-05-15 17:13:13.826892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.030 [2024-05-15 17:13:13.827121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.827153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.827542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.827936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.827966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.828344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.828726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.828757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.829119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.829499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.829528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.829930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.830310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.830341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.830597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.830875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.830905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.831273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.831369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.831398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 
00:28:35.031 [2024-05-15 17:13:13.831756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.832002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.832030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.832406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.832798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.832828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.833203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.833419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.833453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.833828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.834083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.834113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.834507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.834896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.834929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.835302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.835681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.835712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.835963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.836326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.836358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 
00:28:35.031 [2024-05-15 17:13:13.836757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.837148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.837178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.031 [2024-05-15 17:13:13.837573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:35.031 [2024-05-15 17:13:13.838022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.838053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.031 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.031 [2024-05-15 17:13:13.838435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.838836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.838867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.839286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.839676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.839707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.840131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.840510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.840541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.840778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.841160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.841189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 
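Interleaved with the connection errors, the shell trace shows target_disconnect.sh@19 issuing rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. creating a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0 for the test subsystem (the bare "Malloc0" echoed a few lines further down is that RPC's return value). A sketch of the same step run by hand against a live SPDK target, assuming the stock scripts/rpc.py helper and the default RPC socket:

    # Create the 64 MB, 512-byte-block malloc bdev used by the test.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

    # Optional check that the bdev exists (verification step, not in the log).
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0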
00:28:35.031 [2024-05-15 17:13:13.841568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.841847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.841880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.842140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.842497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.031 [2024-05-15 17:13:13.842526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.031 qpair failed and we were unable to recover it. 00:28:35.031 [2024-05-15 17:13:13.842929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.843309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.843342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.843773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.844034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.844063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.844450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.844830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.844860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.845239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.845635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.845665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.846078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.846186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.846215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 
00:28:35.297 [2024-05-15 17:13:13.846621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.846994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.847023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.847426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.847819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.847850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.848221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.848625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.848655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.848937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.849192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.849220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.849629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.850009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.850038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.850460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.850858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.850888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.851270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.851631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.851661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 
00:28:35.297 [2024-05-15 17:13:13.852029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.852403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.852432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.852795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.853176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.853205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.853522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.853957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.853988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.854375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.854747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.854779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.855169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.855389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.855418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.855805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.856219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.856248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.856519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.856936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.856966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 
00:28:35.297 [2024-05-15 17:13:13.857347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.857736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.297 [2024-05-15 17:13:13.857766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.297 qpair failed and we were unable to recover it. 00:28:35.297 [2024-05-15 17:13:13.858158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.858413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.858442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.858805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.859056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.859083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.859469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.859743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.859773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.860188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.860575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.860607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.860858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.861124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.861154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.861516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.861801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.861834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 
00:28:35.298 [2024-05-15 17:13:13.862215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 Malloc0 00:28:35.298 [2024-05-15 17:13:13.862638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.862669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.862909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.298 [2024-05-15 17:13:13.863313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.863341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:35.298 [2024-05-15 17:13:13.863723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.298 [2024-05-15 17:13:13.863839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.863866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.298 [2024-05-15 17:13:13.864220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.864586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.864615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.864878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.865113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.865143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.865529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.865954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.865984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 
00:28:35.298 [2024-05-15 17:13:13.866350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.866734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.866766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.867145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.867527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.867568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.867869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.868204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.868233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.868516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.868896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.868926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.869293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.869655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.869685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.869752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.298 [2024-05-15 17:13:13.870072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.870439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.870466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.870702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.871127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.871156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 
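target_disconnect.sh@21 then runs rpc_cmd nvmf_create_transport -t tcp, and the "*** TCP Transport Init ***" notice from tcp.c confirms the target's TCP transport came up. The trace also passes a -o flag, which is left out of the sketch below rather than guessed at; the manual equivalent with only the unambiguous option would be:

    # Initialize the NVMe-oF TCP transport on the SPDK target.
    ./scripts/rpc.py nvmf_create_transport -t tcp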
00:28:35.298 [2024-05-15 17:13:13.871398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.871796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.871825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.872219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.872593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.872622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.872743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.873083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.873113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.873491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.873870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.873901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.874281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.874423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.874455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.874874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.875249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.875280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.875554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.875987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.876016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 
00:28:35.298 [2024-05-15 17:13:13.876264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.876500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.876530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.298 [2024-05-15 17:13:13.876773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.877152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.298 [2024-05-15 17:13:13.877181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.298 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.877457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.877840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.877871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.878255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.878514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.878541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.299 [2024-05-15 17:13:13.879009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.879265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.299 [2024-05-15 17:13:13.879295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.299 [2024-05-15 17:13:13.879771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:35.299 [2024-05-15 17:13:13.880002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.880030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 
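Next, target_disconnect.sh@22 creates the subsystem the initiator will connect to: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the serial number. Run by hand with scripts/rpc.py, the same step would look like:

    # Create subsystem cnode1, allow any host, set the serial number.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001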
00:28:35.299 [2024-05-15 17:13:13.880452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.880829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.880858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.881253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.881505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.881533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.881963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.882348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.882376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.882800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.883054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.883082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.883477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.883876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.883905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.884299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.884655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.884684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.885084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.885457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.885487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 
00:28:35.299 [2024-05-15 17:13:13.885730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.886116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.886147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.886387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.886767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.886796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.887194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.887569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.887599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.887826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.888224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.888254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.888517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.888902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.888932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.889169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.889578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.889609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.889985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.890369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.890397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 
00:28:35.299 [2024-05-15 17:13:13.890631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:35.299 [2024-05-15 17:13:13.890848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.890878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.299 qpair failed and we were unable to recover it.
00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:35.299 [2024-05-15 17:13:13.891234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:35.299 [2024-05-15 17:13:13.891628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.891658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.299 qpair failed and we were unable to recover it.
00:28:35.299 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:35.299 [2024-05-15 17:13:13.892072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.892448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.892477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.299 qpair failed and we were unable to recover it.
00:28:35.299 [2024-05-15 17:13:13.892884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.893099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.893127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.299 qpair failed and we were unable to recover it.
00:28:35.299 [2024-05-15 17:13:13.893354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.893738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.893769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.299 qpair failed and we were unable to recover it.
00:28:35.299 [2024-05-15 17:13:13.894060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.894313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.299 [2024-05-15 17:13:13.894342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.299 qpair failed and we were unable to recover it.
00:28:35.299 [2024-05-15 17:13:13.894729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.895103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.895138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.895539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.895792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.895821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.896141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.896493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.299 [2024-05-15 17:13:13.896521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.299 qpair failed and we were unable to recover it. 00:28:35.299 [2024-05-15 17:13:13.896923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.897307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.897336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.897705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.898109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.898137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.898497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.898885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.898914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.899302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.899692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.899722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 
00:28:35.300 [2024-05-15 17:13:13.899956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.900322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.900351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 [2024-05-15 17:13:13.900727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.900953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.900983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 [2024-05-15 17:13:13.901334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.901712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.901742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 [2024-05-15 17:13:13.902106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.902494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.902529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:35.300 [2024-05-15 17:13:13.902937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.903159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.903187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:35.300 [2024-05-15 17:13:13.903572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:35.300 [2024-05-15 17:13:13.903982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.904011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 [2024-05-15 17:13:13.904444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.904723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.904755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.905131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.905508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.905536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.905800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.906036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.906066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.906456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.906717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.906746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.907144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.907524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.907564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.907798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.908211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.908240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.908627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.909051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.300 [2024-05-15 17:13:13.909080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420 00:28:35.300 qpair failed and we were unable to recover it. 
00:28:35.300 [2024-05-15 17:13:13.909512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.909846] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:28:35.300 [2024-05-15 17:13:13.909954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.300 [2024-05-15 17:13:13.909982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f720c000b90 with addr=10.0.0.2, port=4420
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 [2024-05-15 17:13:13.910173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:35.300 [2024-05-15 17:13:13.920568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.300 [2024-05-15 17:13:13.920733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.300 [2024-05-15 17:13:13.920787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.300 [2024-05-15 17:13:13.920811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.300 [2024-05-15 17:13:13.920832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90
00:28:35.300 [2024-05-15 17:13:13.920887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:35.300 qpair failed and we were unable to recover it.
00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.300 17:13:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1645785 00:28:35.300 [2024-05-15 17:13:13.930520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.300 [2024-05-15 17:13:13.930667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.300 [2024-05-15 17:13:13.930706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.300 [2024-05-15 17:13:13.930723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.300 [2024-05-15 17:13:13.930737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.300 [2024-05-15 17:13:13.930772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.940484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.300 [2024-05-15 17:13:13.940580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.300 [2024-05-15 17:13:13.940611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.300 [2024-05-15 17:13:13.940630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.300 [2024-05-15 17:13:13.940642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.300 [2024-05-15 17:13:13.940669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.300 qpair failed and we were unable to recover it. 00:28:35.300 [2024-05-15 17:13:13.950429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.300 [2024-05-15 17:13:13.950513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.300 [2024-05-15 17:13:13.950537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.300 [2024-05-15 17:13:13.950554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:13.950563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:13.950585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 
00:28:35.301 [2024-05-15 17:13:13.960461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:13.960557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:13.960587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:13.960596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:13.960603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:13.960624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:13.970366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:13.970441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:13.970469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:13.970479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:13.970486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:13.970505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:13.980487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:13.980565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:13.980590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:13.980600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:13.980608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:13.980626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 
00:28:35.301 [2024-05-15 17:13:13.990540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:13.990619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:13.990642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:13.990651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:13.990658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:13.990678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:14.000647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.000749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.000773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.000781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:14.000789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:14.000808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:14.010566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.010634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.010657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.010665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:14.010672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:14.010691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 
00:28:35.301 [2024-05-15 17:13:14.020586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.020659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.020683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.020691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:14.020699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:14.020717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:14.030631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.030702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.030725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.030739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:14.030747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:14.030766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:14.040681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.040769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.040791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.040800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:14.040808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:14.040826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 
00:28:35.301 [2024-05-15 17:13:14.050720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.050797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.050819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.050827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.301 [2024-05-15 17:13:14.050835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.301 [2024-05-15 17:13:14.050853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.301 qpair failed and we were unable to recover it. 00:28:35.301 [2024-05-15 17:13:14.060756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.301 [2024-05-15 17:13:14.060830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.301 [2024-05-15 17:13:14.060852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.301 [2024-05-15 17:13:14.060860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.060869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.060886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 00:28:35.302 [2024-05-15 17:13:14.070685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.302 [2024-05-15 17:13:14.070760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.302 [2024-05-15 17:13:14.070783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.302 [2024-05-15 17:13:14.070792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.070801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.070819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 
00:28:35.302 [2024-05-15 17:13:14.080851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.302 [2024-05-15 17:13:14.080937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.302 [2024-05-15 17:13:14.080959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.302 [2024-05-15 17:13:14.080967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.080975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.080993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 00:28:35.302 [2024-05-15 17:13:14.090888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.302 [2024-05-15 17:13:14.090954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.302 [2024-05-15 17:13:14.090977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.302 [2024-05-15 17:13:14.090985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.090993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.091011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 00:28:35.302 [2024-05-15 17:13:14.100815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.302 [2024-05-15 17:13:14.100890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.302 [2024-05-15 17:13:14.100913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.302 [2024-05-15 17:13:14.100921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.100929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.100947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 
00:28:35.302 [2024-05-15 17:13:14.110888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.302 [2024-05-15 17:13:14.110963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.302 [2024-05-15 17:13:14.110984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.302 [2024-05-15 17:13:14.110993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.111001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.111019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 00:28:35.302 [2024-05-15 17:13:14.120835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.302 [2024-05-15 17:13:14.120914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.302 [2024-05-15 17:13:14.120951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.302 [2024-05-15 17:13:14.120961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.302 [2024-05-15 17:13:14.120968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.302 [2024-05-15 17:13:14.120989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.302 qpair failed and we were unable to recover it. 00:28:35.568 [2024-05-15 17:13:14.130858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.130956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.130982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.130991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.130998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.131017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 
00:28:35.569 [2024-05-15 17:13:14.140969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.141042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.141065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.141074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.141082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.141100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.151013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.151133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.151155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.151163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.151170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.151188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.161067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.161147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.161170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.161178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.161185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.161210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 
00:28:35.569 [2024-05-15 17:13:14.171109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.171176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.171197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.171206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.171212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.171231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.181127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.181205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.181227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.181235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.181243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.181261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.191283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.191390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.191413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.191421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.191428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.191445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 
00:28:35.569 [2024-05-15 17:13:14.201270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.201357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.201378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.201387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.201395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.201414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.211337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.211425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.211454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.211463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.211470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.211488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.221308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.221384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.221406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.221414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.221422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.221439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 
00:28:35.569 [2024-05-15 17:13:14.231263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.231336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.231358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.231366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.231373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.231392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.241331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.241404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.241425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.241434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.241441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.241459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 00:28:35.569 [2024-05-15 17:13:14.251359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.251432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.251453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.251462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.251476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.569 [2024-05-15 17:13:14.251495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.569 qpair failed and we were unable to recover it. 
00:28:35.569 [2024-05-15 17:13:14.261368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.569 [2024-05-15 17:13:14.261443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.569 [2024-05-15 17:13:14.261465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.569 [2024-05-15 17:13:14.261473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.569 [2024-05-15 17:13:14.261480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.261498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.271417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.271498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.271521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.271529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.271535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.271558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.281431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.281522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.281552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.281560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.281567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.281585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 
00:28:35.570 [2024-05-15 17:13:14.291326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.291390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.291415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.291423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.291431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.291450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.301486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.301570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.301594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.301602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.301609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.301629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.311525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.311602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.311624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.311633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.311640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.311658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 
00:28:35.570 [2024-05-15 17:13:14.321544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.321630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.321652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.321660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.321667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.321685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.331507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.331626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.331649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.331659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.331667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.331685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.341610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.341684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.341707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.341721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.341728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.341746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 
00:28:35.570 [2024-05-15 17:13:14.351675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.351745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.351769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.351778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.351784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.351804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.361715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.361820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.361843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.361852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.361860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.361877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.371744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.371815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.371838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.371846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.371852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.371870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 
00:28:35.570 [2024-05-15 17:13:14.381783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.381880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.381902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.381910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.381918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.381936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.570 [2024-05-15 17:13:14.391789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.570 [2024-05-15 17:13:14.391858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.570 [2024-05-15 17:13:14.391881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.570 [2024-05-15 17:13:14.391889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.570 [2024-05-15 17:13:14.391896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.570 [2024-05-15 17:13:14.391914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.570 qpair failed and we were unable to recover it. 00:28:35.879 [2024-05-15 17:13:14.401833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.401917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.401939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.401948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.401956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.401975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 
00:28:35.879 [2024-05-15 17:13:14.411856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.411931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.411954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.411963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.411971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.411989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 00:28:35.879 [2024-05-15 17:13:14.421889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.421967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.421990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.421998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.422007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.422024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 00:28:35.879 [2024-05-15 17:13:14.431910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.431982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.432004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.432019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.432026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.432046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 
00:28:35.879 [2024-05-15 17:13:14.441945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.442029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.442052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.442060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.442067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.442085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 00:28:35.879 [2024-05-15 17:13:14.451951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.452020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.452042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.452050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.452058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.452075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 00:28:35.879 [2024-05-15 17:13:14.461912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.461992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.462016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.462024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.462031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.462050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.879 qpair failed and we were unable to recover it. 
00:28:35.879 [2024-05-15 17:13:14.472101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.879 [2024-05-15 17:13:14.472175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.879 [2024-05-15 17:13:14.472198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.879 [2024-05-15 17:13:14.472206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.879 [2024-05-15 17:13:14.472213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.879 [2024-05-15 17:13:14.472230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.482064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.482159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.482198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.482208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.482215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.482239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.492049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.492122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.492160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.492170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.492178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.492202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 
00:28:35.880 [2024-05-15 17:13:14.501989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.502057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.502083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.502091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.502098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.502119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.512137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.512216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.512239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.512248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.512254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.512273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.522152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.522262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.522291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.522300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.522308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.522326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 
00:28:35.880 [2024-05-15 17:13:14.532138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.532260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.532286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.532295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.532302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.532320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.542111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.542183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.542206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.542214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.542221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.542239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.552252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.552337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.552359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.552368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.552374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.552393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 
00:28:35.880 [2024-05-15 17:13:14.562293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.562373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.562396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.562404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.562413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.562439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.572345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.572458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.572482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.572491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.572498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.572517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.582368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.582442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.582464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.582473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.582481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.582499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 
00:28:35.880 [2024-05-15 17:13:14.592413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.880 [2024-05-15 17:13:14.592484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.880 [2024-05-15 17:13:14.592508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.880 [2024-05-15 17:13:14.592516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.880 [2024-05-15 17:13:14.592523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.880 [2024-05-15 17:13:14.592542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.880 qpair failed and we were unable to recover it. 00:28:35.880 [2024-05-15 17:13:14.602435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.602514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.602537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.602552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.602560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.602578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.612439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.612509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.612539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.612556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.612563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.612582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 
00:28:35.881 [2024-05-15 17:13:14.622512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.622639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.622663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.622673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.622679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.622698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.632510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.632588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.632611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.632619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.632628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.632649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.642581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.642656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.642678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.642687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.642694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.642712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 
00:28:35.881 [2024-05-15 17:13:14.652577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.652647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.652669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.652678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.652693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.652711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.662674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.662801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.662824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.662832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.662839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.662858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.672636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.672706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.672728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.672737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.672743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.672763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 
00:28:35.881 [2024-05-15 17:13:14.682671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.682792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.682814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.682822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.682829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.682848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.692703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.692781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.692803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.692811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.692818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.692837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 00:28:35.881 [2024-05-15 17:13:14.702735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.881 [2024-05-15 17:13:14.702812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.881 [2024-05-15 17:13:14.702840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.881 [2024-05-15 17:13:14.702849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.881 [2024-05-15 17:13:14.702858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:35.881 [2024-05-15 17:13:14.702877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:35.881 qpair failed and we were unable to recover it. 
00:28:36.145 [2024-05-15 17:13:14.712741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.712817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.712841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.712850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.712858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.712876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 00:28:36.145 [2024-05-15 17:13:14.722794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.722889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.722911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.722919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.722927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.722944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 00:28:36.145 [2024-05-15 17:13:14.732865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.732988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.733011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.733019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.733026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.733044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 
00:28:36.145 [2024-05-15 17:13:14.742879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.742956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.742978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.742986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.743001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.743019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 00:28:36.145 [2024-05-15 17:13:14.752905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.752979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.753001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.753009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.753016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.753035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 00:28:36.145 [2024-05-15 17:13:14.762909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.763043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.763066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.763075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.763082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.763100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 
00:28:36.145 [2024-05-15 17:13:14.772951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.773025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.773053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.773061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.773069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.773088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 00:28:36.145 [2024-05-15 17:13:14.782967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.783035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.783058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.783066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.783073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.783091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 00:28:36.145 [2024-05-15 17:13:14.793011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.793083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.793107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.145 [2024-05-15 17:13:14.793115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.145 [2024-05-15 17:13:14.793122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.145 [2024-05-15 17:13:14.793142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.145 qpair failed and we were unable to recover it. 
00:28:36.145 [2024-05-15 17:13:14.803032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.145 [2024-05-15 17:13:14.803126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.145 [2024-05-15 17:13:14.803149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.803158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.803165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.803183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.812944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.813020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.813044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.813052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.813060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.813078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.823100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.823175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.823197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.823206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.823214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.823232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 
00:28:36.146 [2024-05-15 17:13:14.833132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.833200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.833224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.833238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.833244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.833263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.843186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.843282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.843321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.843331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.843338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.843363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.853194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.853270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.853310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.853321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.853329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.853352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 
00:28:36.146 [2024-05-15 17:13:14.863218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.863296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.863333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.863343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.863351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.863377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.873279] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.873363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.873390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.873398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.873405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.873425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.883313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.883406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.883445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.883455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.883462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.883486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 
00:28:36.146 [2024-05-15 17:13:14.893309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.893377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.893403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.893411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.893418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.893440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.903387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.903464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.903487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.903496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.903504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.903522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.913392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.913511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.913536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.913553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.913561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.913583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 
00:28:36.146 [2024-05-15 17:13:14.923394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.923481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.923511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.923521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.923528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.923559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.933437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.933507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.933530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.933538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.933553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.933571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 00:28:36.146 [2024-05-15 17:13:14.943388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.943468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.146 [2024-05-15 17:13:14.943491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.146 [2024-05-15 17:13:14.943500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.146 [2024-05-15 17:13:14.943507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.146 [2024-05-15 17:13:14.943526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.146 qpair failed and we were unable to recover it. 
00:28:36.146 [2024-05-15 17:13:14.953518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.146 [2024-05-15 17:13:14.953608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.147 [2024-05-15 17:13:14.953632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.147 [2024-05-15 17:13:14.953640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.147 [2024-05-15 17:13:14.953648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.147 [2024-05-15 17:13:14.953667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.147 qpair failed and we were unable to recover it. 00:28:36.147 [2024-05-15 17:13:14.963456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.147 [2024-05-15 17:13:14.963554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.147 [2024-05-15 17:13:14.963577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.147 [2024-05-15 17:13:14.963586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.147 [2024-05-15 17:13:14.963593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.147 [2024-05-15 17:13:14.963618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.147 qpair failed and we were unable to recover it. 00:28:36.147 [2024-05-15 17:13:14.973560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.147 [2024-05-15 17:13:14.973639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.147 [2024-05-15 17:13:14.973662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.147 [2024-05-15 17:13:14.973671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.147 [2024-05-15 17:13:14.973679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.147 [2024-05-15 17:13:14.973698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.147 qpair failed and we were unable to recover it. 
00:28:36.410 [2024-05-15 17:13:14.983600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.410 [2024-05-15 17:13:14.983673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.410 [2024-05-15 17:13:14.983696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.410 [2024-05-15 17:13:14.983705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.410 [2024-05-15 17:13:14.983712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.410 [2024-05-15 17:13:14.983729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.410 qpair failed and we were unable to recover it. 00:28:36.410 [2024-05-15 17:13:14.993635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.410 [2024-05-15 17:13:14.993706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.410 [2024-05-15 17:13:14.993729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.410 [2024-05-15 17:13:14.993738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.410 [2024-05-15 17:13:14.993744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.410 [2024-05-15 17:13:14.993762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.410 qpair failed and we were unable to recover it. 00:28:36.410 [2024-05-15 17:13:15.003673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.410 [2024-05-15 17:13:15.003765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.410 [2024-05-15 17:13:15.003787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.410 [2024-05-15 17:13:15.003796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.410 [2024-05-15 17:13:15.003803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.410 [2024-05-15 17:13:15.003821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.410 qpair failed and we were unable to recover it. 
00:28:36.410 [2024-05-15 17:13:15.013685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.410 [2024-05-15 17:13:15.013810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.410 [2024-05-15 17:13:15.013840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.410 [2024-05-15 17:13:15.013848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.410 [2024-05-15 17:13:15.013855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.410 [2024-05-15 17:13:15.013873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.410 qpair failed and we were unable to recover it. 00:28:36.410 [2024-05-15 17:13:15.023747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.023814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.023837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.023846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.023854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.023872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.033750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.033849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.033872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.033880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.033887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.033905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 
00:28:36.411 [2024-05-15 17:13:15.043792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.043880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.043902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.043910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.043919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.043936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.053702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.053774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.053796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.053805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.053819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.053836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.063892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.063999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.064023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.064031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.064038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.064057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 
00:28:36.411 [2024-05-15 17:13:15.073873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.073945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.073967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.073976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.073983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.074000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.083811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.083936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.083962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.083971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.083979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.083999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.093928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.094000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.094024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.094033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.094040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.094059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 
00:28:36.411 [2024-05-15 17:13:15.103976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.104053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.104076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.104085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.104092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.104112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.113994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.114067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.114090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.114099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.114106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.411 [2024-05-15 17:13:15.114125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.411 qpair failed and we were unable to recover it. 00:28:36.411 [2024-05-15 17:13:15.123998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.411 [2024-05-15 17:13:15.124087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.411 [2024-05-15 17:13:15.124110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.411 [2024-05-15 17:13:15.124118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.411 [2024-05-15 17:13:15.124127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.124144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 
00:28:36.412 [2024-05-15 17:13:15.134010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.134119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.134142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.134151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.134158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.134175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.143953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.144027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.144049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.144057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.144072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.144089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.154150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.154225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.154248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.154257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.154267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.154285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 
00:28:36.412 [2024-05-15 17:13:15.164143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.164237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.164276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.164286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.164294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.164319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.174151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.174232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.174271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.174281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.174290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.174314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.184188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.184272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.184298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.184308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.184315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.184335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 
00:28:36.412 [2024-05-15 17:13:15.194253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.194324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.194349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.194358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.194364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.194385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.204188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.204273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.204297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.204305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.204312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.204331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.214242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.214320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.214344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.214353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.214360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.214379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 
00:28:36.412 [2024-05-15 17:13:15.224325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.224412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.224435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.224444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.412 [2024-05-15 17:13:15.224451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.412 [2024-05-15 17:13:15.224470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.412 qpair failed and we were unable to recover it. 00:28:36.412 [2024-05-15 17:13:15.234356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.412 [2024-05-15 17:13:15.234435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.412 [2024-05-15 17:13:15.234459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.412 [2024-05-15 17:13:15.234475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.413 [2024-05-15 17:13:15.234482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.413 [2024-05-15 17:13:15.234499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.413 qpair failed and we were unable to recover it. 00:28:36.676 [2024-05-15 17:13:15.244382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.244467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.244491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.244499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.244508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.244526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 
00:28:36.676 [2024-05-15 17:13:15.254437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.254510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.254534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.254542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.254557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.254575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 00:28:36.676 [2024-05-15 17:13:15.264435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.264510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.264533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.264541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.264560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.264579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 00:28:36.676 [2024-05-15 17:13:15.274457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.274531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.274560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.274569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.274577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.274594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 
00:28:36.676 [2024-05-15 17:13:15.284549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.284646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.284668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.284678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.284685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.284703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 00:28:36.676 [2024-05-15 17:13:15.294532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.294657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.294680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.294688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.294696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.294714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 00:28:36.676 [2024-05-15 17:13:15.304552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.676 [2024-05-15 17:13:15.304621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.676 [2024-05-15 17:13:15.304644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.676 [2024-05-15 17:13:15.304652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.676 [2024-05-15 17:13:15.304659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.676 [2024-05-15 17:13:15.304676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.676 qpair failed and we were unable to recover it. 
00:28:36.676 [2024-05-15 17:13:15.314601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.314708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.314729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.314738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.314747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.314764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.324623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.324733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.324761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.324769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.324778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.324796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.334655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.334726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.334748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.334757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.334763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.334781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 
00:28:36.677 [2024-05-15 17:13:15.344688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.344760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.344784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.344792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.344799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.344817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.354800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.354873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.354897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.354905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.354913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.354933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.364767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.364890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.364914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.364922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.364929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.364954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 
00:28:36.677 [2024-05-15 17:13:15.374773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.374838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.374861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.374869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.374876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.374894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.384804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.384882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.384904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.384912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.384919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.384937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.394909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.395009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.395031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.395039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.395046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.395064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 
00:28:36.677 [2024-05-15 17:13:15.404860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.404951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.404974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.404983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.404990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.405008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.414956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.415026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.415055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.415064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.415072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.415089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.424912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.424981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.425003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.425011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.425018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.425037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 
00:28:36.677 [2024-05-15 17:13:15.434994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.435066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.435088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.677 [2024-05-15 17:13:15.435096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.677 [2024-05-15 17:13:15.435103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.677 [2024-05-15 17:13:15.435122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.677 qpair failed and we were unable to recover it. 00:28:36.677 [2024-05-15 17:13:15.445024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.677 [2024-05-15 17:13:15.445106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.677 [2024-05-15 17:13:15.445128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.445136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.445143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.445162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 00:28:36.678 [2024-05-15 17:13:15.455042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.678 [2024-05-15 17:13:15.455116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.678 [2024-05-15 17:13:15.455138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.455146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.455155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.455178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 
00:28:36.678 [2024-05-15 17:13:15.465057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.678 [2024-05-15 17:13:15.465132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.678 [2024-05-15 17:13:15.465161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.465169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.465177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.465198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 00:28:36.678 [2024-05-15 17:13:15.475173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.678 [2024-05-15 17:13:15.475277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.678 [2024-05-15 17:13:15.475301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.475309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.475316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.475334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 00:28:36.678 [2024-05-15 17:13:15.485149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.678 [2024-05-15 17:13:15.485236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.678 [2024-05-15 17:13:15.485260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.485268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.485275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.485294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 
00:28:36.678 [2024-05-15 17:13:15.495222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.678 [2024-05-15 17:13:15.495301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.678 [2024-05-15 17:13:15.495340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.495350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.495358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.495382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 00:28:36.678 [2024-05-15 17:13:15.505202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.678 [2024-05-15 17:13:15.505288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.678 [2024-05-15 17:13:15.505327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.678 [2024-05-15 17:13:15.505338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.678 [2024-05-15 17:13:15.505346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.678 [2024-05-15 17:13:15.505370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.678 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.515214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.515295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.515335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.515345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.515353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.515377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 
00:28:36.942 [2024-05-15 17:13:15.525250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.525330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.525355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.525363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.525371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.525392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.535303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.535379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.535402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.535410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.535419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.535438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.545337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.545409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.545432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.545440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.545462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.545481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 
00:28:36.942 [2024-05-15 17:13:15.555325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.555395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.555418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.555426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.555433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.555453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.565395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.565482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.565505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.565514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.565523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.565540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.575411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.575478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.575501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.575509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.575516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.575534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 
00:28:36.942 [2024-05-15 17:13:15.585462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.585585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.585609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.585617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.585624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.585643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.595354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.595437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.595460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.595469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.595476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.595494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.605519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.605609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.605633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.605642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.605648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.605667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 
00:28:36.942 [2024-05-15 17:13:15.615617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.615688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.615712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.942 [2024-05-15 17:13:15.615719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.942 [2024-05-15 17:13:15.615726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.942 [2024-05-15 17:13:15.615747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.942 qpair failed and we were unable to recover it. 00:28:36.942 [2024-05-15 17:13:15.625554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.942 [2024-05-15 17:13:15.625642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.942 [2024-05-15 17:13:15.625665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.625672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.625680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.625700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.635606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.635674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.635697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.635711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.635718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.635738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 
00:28:36.943 [2024-05-15 17:13:15.645655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.645739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.645762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.645770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.645778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.645796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.655642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.655713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.655740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.655749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.655757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.655777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.665565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.665642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.665666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.665675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.665683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.665702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 
00:28:36.943 [2024-05-15 17:13:15.675721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.675796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.675818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.675827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.675835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.675854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.685765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.685844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.685867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.685875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.685883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.685902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.695798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.695873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.695895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.695903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.695911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.695929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 
00:28:36.943 [2024-05-15 17:13:15.705819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.705893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.705914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.705922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.705931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.705949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.715818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.715895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.715916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.715925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.715934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.715951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.725869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.725980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.726004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.726019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.726026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.726044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 
00:28:36.943 [2024-05-15 17:13:15.735873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.735951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.735973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.735982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.735990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.736007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.745905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.746027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.746049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.746058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.746065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.746082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 00:28:36.943 [2024-05-15 17:13:15.755949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.756023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.943 [2024-05-15 17:13:15.756047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.943 [2024-05-15 17:13:15.756055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.943 [2024-05-15 17:13:15.756062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.943 [2024-05-15 17:13:15.756081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.943 qpair failed and we were unable to recover it. 
00:28:36.943 [2024-05-15 17:13:15.765999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.943 [2024-05-15 17:13:15.766120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.944 [2024-05-15 17:13:15.766143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.944 [2024-05-15 17:13:15.766153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.944 [2024-05-15 17:13:15.766160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:36.944 [2024-05-15 17:13:15.766177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.944 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.776007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.776083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.776106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.776115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.776123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.776141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.786048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.786122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.786144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.786152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.786161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.786179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-05-15 17:13:15.796046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.796114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.796136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.796144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.796151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.796171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.806112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.806197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.806220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.806229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.806237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.806255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.816124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.816215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.816260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.816271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.816279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.816304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-05-15 17:13:15.826132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.826206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.826244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.826255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.826262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.826287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.836047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.836113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.836138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.836147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.836154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.836177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.846271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.846391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.846413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.846422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.846429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.846448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-05-15 17:13:15.856192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.856262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.856297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.856308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.856315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.856347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.866276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.866349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.866385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.866398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.866408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.866432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-05-15 17:13:15.876246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.207 [2024-05-15 17:13:15.876331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.207 [2024-05-15 17:13:15.876354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.207 [2024-05-15 17:13:15.876363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.207 [2024-05-15 17:13:15.876369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.207 [2024-05-15 17:13:15.876388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-05-15 17:13:15.886311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.886388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.886407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.886415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.886422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.886440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.896205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.896300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.896321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.896328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.896336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.896353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.906337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.906396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.906420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.906428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.906435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.906451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-05-15 17:13:15.916327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.916381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.916400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.916407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.916414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.916430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.926433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.926515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.926533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.926540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.926553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.926569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.936412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.936467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.936484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.936491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.936498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.936513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-05-15 17:13:15.946417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.946476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.946493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.946500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.946511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.946527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.956460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.956517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.956534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.956542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.956556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.956572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.966536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.966601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.966620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.966627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.966637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.966653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-05-15 17:13:15.976502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.976555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.976573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.976580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.976587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.976602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.986530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.986583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.986600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.986607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.986614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.986628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-05-15 17:13:15.996557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:15.996613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:15.996629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:15.996636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.208 [2024-05-15 17:13:15.996643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.208 [2024-05-15 17:13:15.996657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-05-15 17:13:16.006597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.208 [2024-05-15 17:13:16.006685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.208 [2024-05-15 17:13:16.006701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.208 [2024-05-15 17:13:16.006709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.209 [2024-05-15 17:13:16.006715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.209 [2024-05-15 17:13:16.006730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-05-15 17:13:16.016613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.209 [2024-05-15 17:13:16.016666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.209 [2024-05-15 17:13:16.016682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.209 [2024-05-15 17:13:16.016689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.209 [2024-05-15 17:13:16.016696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.209 [2024-05-15 17:13:16.016711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-05-15 17:13:16.026679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.209 [2024-05-15 17:13:16.026734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.209 [2024-05-15 17:13:16.026749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.209 [2024-05-15 17:13:16.026757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.209 [2024-05-15 17:13:16.026763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.209 [2024-05-15 17:13:16.026778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-05-15 17:13:16.036693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.209 [2024-05-15 17:13:16.036746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.209 [2024-05-15 17:13:16.036761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.209 [2024-05-15 17:13:16.036772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.209 [2024-05-15 17:13:16.036779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.209 [2024-05-15 17:13:16.036793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.046702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.046780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.046796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.046805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.046816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.046831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.056735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.056788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.056804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.056811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.056817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.056832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 
00:28:37.472 [2024-05-15 17:13:16.066737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.066789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.066804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.066811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.066818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.066832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.076810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.076859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.076875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.076882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.076889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.076903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.086878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.086951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.086966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.086973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.086979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.086994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 
00:28:37.472 [2024-05-15 17:13:16.096821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.096873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.096889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.096896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.096902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.096916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.106891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.106943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.106958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.106965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.106972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.106986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.116901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.116954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.116968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.116975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.116982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.116995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 
00:28:37.472 [2024-05-15 17:13:16.126920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.126979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.126993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.127004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.127010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.127024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.136948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.136999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.137013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.137021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.137027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.137041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.146948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.146999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.147013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.147020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.147026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.147040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 
00:28:37.472 [2024-05-15 17:13:16.157021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.157074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.157088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.157095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.157102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.157116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.472 qpair failed and we were unable to recover it. 00:28:37.472 [2024-05-15 17:13:16.167032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.472 [2024-05-15 17:13:16.167090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.472 [2024-05-15 17:13:16.167104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.472 [2024-05-15 17:13:16.167111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.472 [2024-05-15 17:13:16.167117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.472 [2024-05-15 17:13:16.167131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.177077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.177129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.177144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.177151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.177158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.177172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 
00:28:37.473 [2024-05-15 17:13:16.187085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.187136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.187151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.187158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.187166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.187180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.197169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.197243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.197258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.197266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.197272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.197287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.207159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.207265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.207280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.207287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.207294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.207308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 
00:28:37.473 [2024-05-15 17:13:16.217171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.217220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.217238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.217245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.217252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.217266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.227229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.227280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.227295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.227302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.227308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.227322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.237148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.237199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.237214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.237221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.237227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.237241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 
00:28:37.473 [2024-05-15 17:13:16.247254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.247309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.247323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.247331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.247337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.247351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.257269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.257320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.257335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.257342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.257348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.257368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.267299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.267353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.267368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.267375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.267381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.267395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 
00:28:37.473 [2024-05-15 17:13:16.277215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.277273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.277289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.277296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.277302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.277317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.287360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.287451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.287467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.287474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.287481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.287495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 00:28:37.473 [2024-05-15 17:13:16.297377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.473 [2024-05-15 17:13:16.297426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.473 [2024-05-15 17:13:16.297440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.473 [2024-05-15 17:13:16.297447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.473 [2024-05-15 17:13:16.297454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.473 [2024-05-15 17:13:16.297467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.473 qpair failed and we were unable to recover it. 
00:28:37.736 [2024-05-15 17:13:16.307414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.307468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.307486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.307493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.307500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.307514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.317404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.317456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.317471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.317479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.317485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.317499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.327356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.327413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.327428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.327435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.327441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.327455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 
00:28:37.737 [2024-05-15 17:13:16.337447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.337498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.337513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.337520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.337526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.337540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.347519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.347574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.347589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.347596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.347606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.347620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.357560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.357610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.357625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.357633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.357639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.357653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 
00:28:37.737 [2024-05-15 17:13:16.367606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.367667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.367682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.367689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.367695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.367709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.377636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.377729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.377744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.377751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.377758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.377772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.387628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.387682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.387697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.387704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.387710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.387725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 
00:28:37.737 [2024-05-15 17:13:16.397586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.397643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.397658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.397665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.397672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.397687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.407762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.407825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.407840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.407847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.407855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.407870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 00:28:37.737 [2024-05-15 17:13:16.417759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.417823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.417837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.417845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.737 [2024-05-15 17:13:16.417852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.737 [2024-05-15 17:13:16.417866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.737 qpair failed and we were unable to recover it. 
00:28:37.737 [2024-05-15 17:13:16.427740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.737 [2024-05-15 17:13:16.427797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.737 [2024-05-15 17:13:16.427812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.737 [2024-05-15 17:13:16.427819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.427825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.427840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.437805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.437905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.437920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.437927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.437938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.437952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.447812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.447866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.447881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.447888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.447895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.447909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 
00:28:37.738 [2024-05-15 17:13:16.457834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.457884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.457898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.457905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.457912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.457926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.467839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.467894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.467909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.467916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.467923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.467937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.477778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.477829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.477844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.477852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.477858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.477872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 
00:28:37.738 [2024-05-15 17:13:16.487890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.487961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.487975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.487983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.487989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.488010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.497926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.497978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.497993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.498000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.498007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.498020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.507965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.508013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.508028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.508035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.508041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.508055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 
00:28:37.738 [2024-05-15 17:13:16.517991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.518044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.518058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.518065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.518072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.518086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.528028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.528088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.528103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.528113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.528120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.528133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.538055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.538137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.538152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.538159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.538165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.538179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 
00:28:37.738 [2024-05-15 17:13:16.548044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.548117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.548131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.548138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.548144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.548158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.558095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.558151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.558165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.738 [2024-05-15 17:13:16.558172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.738 [2024-05-15 17:13:16.558178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.738 [2024-05-15 17:13:16.558192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.738 qpair failed and we were unable to recover it. 00:28:37.738 [2024-05-15 17:13:16.568092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.738 [2024-05-15 17:13:16.568153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.738 [2024-05-15 17:13:16.568168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.739 [2024-05-15 17:13:16.568175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.739 [2024-05-15 17:13:16.568182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:37.739 [2024-05-15 17:13:16.568196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:37.739 qpair failed and we were unable to recover it. 
00:28:38.002 [2024-05-15 17:13:16.578157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.578206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.578222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.578228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.578235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.578249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.588159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.588211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.588226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.588233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.588239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.588253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.598211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.598270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.598296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.598305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.598312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.598331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 
00:28:38.002 [2024-05-15 17:13:16.608248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.608310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.608335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.608344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.608351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.608371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.618145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.618214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.618243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.618252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.618259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.618279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.628282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.628338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.628362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.628371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.628378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.628398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 
00:28:38.002 [2024-05-15 17:13:16.638298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.638358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.638375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.638382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.638389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.638404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.648344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.648395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.648410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.648418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.648425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.648439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.658354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.658403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.658418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.658425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.658431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.658450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 
00:28:38.002 [2024-05-15 17:13:16.668390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.668442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.668458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.668465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.668472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.668486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.678427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.678479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.678494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.678501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.678507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.678521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.002 [2024-05-15 17:13:16.688328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.688387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.688402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.688409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.688415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.688429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 
00:28:38.002 [2024-05-15 17:13:16.698493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.002 [2024-05-15 17:13:16.698556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.002 [2024-05-15 17:13:16.698571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.002 [2024-05-15 17:13:16.698578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.002 [2024-05-15 17:13:16.698585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.002 [2024-05-15 17:13:16.698599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.002 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.708509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.708604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.708623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.708630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.708636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.708650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.718510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.718566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.718581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.718588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.718595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.718609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 
00:28:38.003 [2024-05-15 17:13:16.728451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.728511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.728526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.728533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.728540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.728560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.738600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.738651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.738666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.738673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.738680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.738695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.748490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.748549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.748564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.748571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.748581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.748596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 
00:28:38.003 [2024-05-15 17:13:16.758645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.758700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.758715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.758722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.758728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.758742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.768684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.768744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.768759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.768766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.768773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.768787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.778697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.778754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.778769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.778776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.778782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.778797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 
00:28:38.003 [2024-05-15 17:13:16.788726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.788779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.788793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.788800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.788806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.788820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.798750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.798809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.798824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.798831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.798837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.798852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.808813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.808869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.808884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.808891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.808897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.808912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 
00:28:38.003 [2024-05-15 17:13:16.818790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.818839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.818854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.818861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.818867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.818881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.003 [2024-05-15 17:13:16.828849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.003 [2024-05-15 17:13:16.828901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.003 [2024-05-15 17:13:16.828916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.003 [2024-05-15 17:13:16.828923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.003 [2024-05-15 17:13:16.828930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.003 [2024-05-15 17:13:16.828943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.003 qpair failed and we were unable to recover it. 00:28:38.266 [2024-05-15 17:13:16.838865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.266 [2024-05-15 17:13:16.838956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.266 [2024-05-15 17:13:16.838972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.266 [2024-05-15 17:13:16.838979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.266 [2024-05-15 17:13:16.838989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.266 [2024-05-15 17:13:16.839003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.266 qpair failed and we were unable to recover it. 
00:28:38.266 [2024-05-15 17:13:16.848896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.266 [2024-05-15 17:13:16.848953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.266 [2024-05-15 17:13:16.848968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.266 [2024-05-15 17:13:16.848975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.266 [2024-05-15 17:13:16.848981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.266 [2024-05-15 17:13:16.848995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.266 qpair failed and we were unable to recover it. 00:28:38.266 [2024-05-15 17:13:16.858909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.266 [2024-05-15 17:13:16.858963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.266 [2024-05-15 17:13:16.858978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.266 [2024-05-15 17:13:16.858985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.266 [2024-05-15 17:13:16.858992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.266 [2024-05-15 17:13:16.859006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.266 qpair failed and we were unable to recover it. 00:28:38.266 [2024-05-15 17:13:16.868942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.266 [2024-05-15 17:13:16.868998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.266 [2024-05-15 17:13:16.869013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.266 [2024-05-15 17:13:16.869020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.266 [2024-05-15 17:13:16.869026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.266 [2024-05-15 17:13:16.869040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.266 qpair failed and we were unable to recover it. 
00:28:38.267 [2024-05-15 17:13:16.879005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.879059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.879075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.879082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.879088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.879102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 00:28:38.267 [2024-05-15 17:13:16.889017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.889069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.889084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.889092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.889098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.889112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 00:28:38.267 [2024-05-15 17:13:16.898910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.898962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.898976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.898983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.898990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.899004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 
00:28:38.267 [2024-05-15 17:13:16.909047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.909144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.909160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.909167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.909174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.909187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 00:28:38.267 [2024-05-15 17:13:16.919104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.919157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.919172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.919180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.919186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.919200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 00:28:38.267 [2024-05-15 17:13:16.929085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.929143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.929158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.929168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.929175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.929189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 
00:28:38.267 [2024-05-15 17:13:16.939134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.939187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.939202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.939209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.939215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.939230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 00:28:38.267 [2024-05-15 17:13:16.949161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.949222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.949237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.949244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.949250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.949265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 00:28:38.267 [2024-05-15 17:13:16.959194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.959248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.267 [2024-05-15 17:13:16.959262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.267 [2024-05-15 17:13:16.959269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.267 [2024-05-15 17:13:16.959276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.267 [2024-05-15 17:13:16.959289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.267 qpair failed and we were unable to recover it. 
00:28:38.267 [2024-05-15 17:13:16.969110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.267 [2024-05-15 17:13:16.969167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:16.969182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:16.969190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:16.969196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:16.969211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:16.979250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:16.979316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:16.979341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:16.979349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:16.979357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:16.979375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:16.989285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:16.989341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:16.989358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:16.989365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:16.989372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:16.989387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 
00:28:38.268 [2024-05-15 17:13:16.999304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:16.999392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:16.999407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:16.999415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:16.999421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:16.999436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:17.009338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.009400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:17.009415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:17.009423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:17.009429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:17.009443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:17.019435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.019497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:17.019517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:17.019525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:17.019531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:17.019551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 
00:28:38.268 [2024-05-15 17:13:17.029276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.029328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:17.029345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:17.029352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:17.029360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:17.029375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:17.039420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.039474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:17.039490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:17.039497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:17.039504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:17.039518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:17.049432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.049490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:17.049505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:17.049512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:17.049518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:17.049532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 
00:28:38.268 [2024-05-15 17:13:17.059357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.059415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.268 [2024-05-15 17:13:17.059429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.268 [2024-05-15 17:13:17.059437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.268 [2024-05-15 17:13:17.059443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.268 [2024-05-15 17:13:17.059470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.268 qpair failed and we were unable to recover it. 00:28:38.268 [2024-05-15 17:13:17.069501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.268 [2024-05-15 17:13:17.069562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.269 [2024-05-15 17:13:17.069578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.269 [2024-05-15 17:13:17.069585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.269 [2024-05-15 17:13:17.069591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.269 [2024-05-15 17:13:17.069606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.269 [2024-05-15 17:13:17.079409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.269 [2024-05-15 17:13:17.079465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.269 [2024-05-15 17:13:17.079481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.269 [2024-05-15 17:13:17.079488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.269 [2024-05-15 17:13:17.079494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.269 [2024-05-15 17:13:17.079509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.269 qpair failed and we were unable to recover it. 
00:28:38.269 [2024-05-15 17:13:17.089532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.269 [2024-05-15 17:13:17.089589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.269 [2024-05-15 17:13:17.089605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.269 [2024-05-15 17:13:17.089613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.269 [2024-05-15 17:13:17.089619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.269 [2024-05-15 17:13:17.089634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.269 qpair failed and we were unable to recover it. 00:28:38.532 [2024-05-15 17:13:17.099574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.532 [2024-05-15 17:13:17.099626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.532 [2024-05-15 17:13:17.099641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.532 [2024-05-15 17:13:17.099649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.532 [2024-05-15 17:13:17.099656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.532 [2024-05-15 17:13:17.099670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.532 qpair failed and we were unable to recover it. 00:28:38.532 [2024-05-15 17:13:17.109590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.532 [2024-05-15 17:13:17.109640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.532 [2024-05-15 17:13:17.109658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.532 [2024-05-15 17:13:17.109666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.532 [2024-05-15 17:13:17.109672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.532 [2024-05-15 17:13:17.109686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.532 qpair failed and we were unable to recover it. 
00:28:38.532 [2024-05-15 17:13:17.119640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.532 [2024-05-15 17:13:17.119732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.119747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.119754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.119760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.119775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.129566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.129679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.129695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.129706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.129713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.129728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.139651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.139700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.139716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.139723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.139730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.139744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 
00:28:38.533 [2024-05-15 17:13:17.149745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.149798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.149813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.149820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.149827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.149845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.159756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.159806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.159820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.159827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.159833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.159847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.169796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.169856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.169871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.169878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.169884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.169898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 
00:28:38.533 [2024-05-15 17:13:17.179783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.179833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.179848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.179855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.179861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.179875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.189797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.189845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.189859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.189866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.189872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.189886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.199845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.199929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.199944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.199951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.199957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.199972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 
00:28:38.533 [2024-05-15 17:13:17.209863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.209917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.209932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.209939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.209945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.209958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.219883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.219936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.219951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.219958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.219966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.219980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.229916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.229967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.229982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.229989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.229995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.230009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 
00:28:38.533 [2024-05-15 17:13:17.239987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.240054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.240069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.240076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.240086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.240100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.249990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.533 [2024-05-15 17:13:17.250050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.533 [2024-05-15 17:13:17.250064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.533 [2024-05-15 17:13:17.250071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.533 [2024-05-15 17:13:17.250078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.533 [2024-05-15 17:13:17.250092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.533 qpair failed and we were unable to recover it. 00:28:38.533 [2024-05-15 17:13:17.260010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.260059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.260074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.260081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.260087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.260101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 
00:28:38.534 [2024-05-15 17:13:17.270029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.270082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.270097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.270104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.270110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.270124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.534 [2024-05-15 17:13:17.280075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.280128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.280142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.280149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.280155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.280169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.534 [2024-05-15 17:13:17.290081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.290134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.290149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.290156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.290163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.290177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 
00:28:38.534 [2024-05-15 17:13:17.300170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.300236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.300251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.300258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.300264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.300278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.534 [2024-05-15 17:13:17.310143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.310194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.310209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.310216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.310222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.310236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.534 [2024-05-15 17:13:17.320163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.320214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.320229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.320236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.320242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.320256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 
00:28:38.534 [2024-05-15 17:13:17.330240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.330297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.330312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.330322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.330328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.330342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.534 [2024-05-15 17:13:17.340207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.340265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.340279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.340286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.340293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.340307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.534 [2024-05-15 17:13:17.350241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.350291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.350306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.350313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.350319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.350332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 
00:28:38.534 [2024-05-15 17:13:17.360282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.534 [2024-05-15 17:13:17.360337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.534 [2024-05-15 17:13:17.360352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.534 [2024-05-15 17:13:17.360359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.534 [2024-05-15 17:13:17.360365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.534 [2024-05-15 17:13:17.360379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.534 qpair failed and we were unable to recover it. 00:28:38.797 [2024-05-15 17:13:17.370300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.797 [2024-05-15 17:13:17.370366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.797 [2024-05-15 17:13:17.370381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.797 [2024-05-15 17:13:17.370388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.797 [2024-05-15 17:13:17.370395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.797 [2024-05-15 17:13:17.370409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.797 qpair failed and we were unable to recover it. 00:28:38.797 [2024-05-15 17:13:17.380326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.380380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.380395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.380402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.380408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.380422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.390328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.390392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.390406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.390414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.390420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.390434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.400386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.400495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.400511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.400522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.400528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.400544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.410422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.410471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.410487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.410494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.410500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.410514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.420421] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.420473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.420489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.420499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.420505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.420519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.430441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.430490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.430505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.430512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.430519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.430533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.440502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.440556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.440571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.440578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.440584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.440598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.450525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.450610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.450626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.450633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.450640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.450654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.460543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.460593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.460608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.460615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.460622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.460636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.470582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.470637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.470651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.470659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.470665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.470679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.480608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.480664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.480679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.480686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.480693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.480707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.490642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.490697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.490712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.490719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.490725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.490740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.500644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.500697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.500712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.500720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.500726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.500741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.510729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.510781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.510799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.510806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.510813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.510827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.520794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.520855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.520870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.520877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.520883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.520897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.530727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.530780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.530794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.530802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.530808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.530822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.540768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.540822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.540836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.540843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.540849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.540863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.550797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.550870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.550885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.550892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.550898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.550915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.560708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.560761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.560776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.560783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.560790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.560805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.570853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.570914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.570929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.570936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.570943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.570957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.580877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.580928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.580943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.580950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.580957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.580971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.590894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.590946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.590960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.590968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.590974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.590988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:38.798 [2024-05-15 17:13:17.600967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.601070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.601089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.601096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.601102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.601116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.610983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.611035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.611050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.611058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.611064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.611078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 00:28:38.798 [2024-05-15 17:13:17.620989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.798 [2024-05-15 17:13:17.621045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.798 [2024-05-15 17:13:17.621059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.798 [2024-05-15 17:13:17.621066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.798 [2024-05-15 17:13:17.621073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:38.798 [2024-05-15 17:13:17.621087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.798 qpair failed and we were unable to recover it. 
00:28:39.062 [2024-05-15 17:13:17.631019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.631068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.631082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.631089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.631095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.631109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.641060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.641110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.641125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.641132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.641142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.641156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.650948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.651006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.651021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.651028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.651035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.651049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 
00:28:39.063 [2024-05-15 17:13:17.661099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.661151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.661166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.661173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.661179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.661193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.671136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.671184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.671199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.671206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.671213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.671227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.681158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.681215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.681230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.681237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.681243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.681257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 
00:28:39.063 [2024-05-15 17:13:17.691189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.691302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.691327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.691336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.691343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.691362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.701103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.701164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.701190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.701198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.701205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.701223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.711235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.711288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.711304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.711311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.711318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.711333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 
00:28:39.063 [2024-05-15 17:13:17.721263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.721324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.721349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.721358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.721364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.721383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.731199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.731307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.731332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.731350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.731357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.731376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.741329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.741384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.741401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.741408] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.741415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.741430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 
00:28:39.063 [2024-05-15 17:13:17.751214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.751265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.751280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.751287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.751293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.751308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.761250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.761301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.761317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.761324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.761330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.761344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 00:28:39.063 [2024-05-15 17:13:17.771412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.771466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.771481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.771488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.063 [2024-05-15 17:13:17.771494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.063 [2024-05-15 17:13:17.771508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.063 qpair failed and we were unable to recover it. 
00:28:39.063 [2024-05-15 17:13:17.781406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.063 [2024-05-15 17:13:17.781456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.063 [2024-05-15 17:13:17.781471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.063 [2024-05-15 17:13:17.781477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.781484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.781497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.791509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.791615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.791630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.791638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.791645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.791659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.801497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.801562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.801578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.801585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.801591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.801605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 
00:28:39.064 [2024-05-15 17:13:17.811506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.811605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.811620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.811627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.811634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.811649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.821532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.821598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.821612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.821623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.821629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.821644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.831571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.831627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.831642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.831649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.831655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.831669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 
00:28:39.064 [2024-05-15 17:13:17.841581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.841641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.841656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.841663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.841670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.841684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.851603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.851675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.851689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.851696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.851702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.851717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.861635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.861688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.861702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.861710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.861716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.861730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 
00:28:39.064 [2024-05-15 17:13:17.871670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.871724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.871739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.871746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.871752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.871766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.881707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.881781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.881796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.881803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.881809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.881823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 00:28:39.064 [2024-05-15 17:13:17.891723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.064 [2024-05-15 17:13:17.891817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.064 [2024-05-15 17:13:17.891832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.064 [2024-05-15 17:13:17.891839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.064 [2024-05-15 17:13:17.891846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.064 [2024-05-15 17:13:17.891860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.064 qpair failed and we were unable to recover it. 
00:28:39.327 [2024-05-15 17:13:17.901771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.901822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.901837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.901844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.901851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.901864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:17.911794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.911849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.911868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.911875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.911881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.911895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:17.921786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.921839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.921854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.921861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.921867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.921881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 
00:28:39.327 [2024-05-15 17:13:17.931829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.931881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.931896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.931903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.931909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.931923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:17.941844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.941894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.941908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.941915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.941922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.941936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:17.951888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.951938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.951953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.951960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.951966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.951984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 
00:28:39.327 [2024-05-15 17:13:17.961823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.961876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.961890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.961898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.961904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.961917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:17.971912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.971963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.971978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.971985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.971992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.972006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:17.981843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.981893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.981907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.981914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.981921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.981935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 
00:28:39.327 [2024-05-15 17:13:17.991988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:17.992038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:17.992053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:17.992060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:17.992066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:17.992080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:18.002011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:18.002061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:18.002079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:18.002087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:18.002093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:18.002107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 00:28:39.327 [2024-05-15 17:13:18.012056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:18.012111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:18.012127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:18.012134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:18.012142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.327 [2024-05-15 17:13:18.012156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.327 qpair failed and we were unable to recover it. 
00:28:39.327 [2024-05-15 17:13:18.022077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.327 [2024-05-15 17:13:18.022130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.327 [2024-05-15 17:13:18.022145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.327 [2024-05-15 17:13:18.022152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.327 [2024-05-15 17:13:18.022158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.022172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.032070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.032142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.032158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.032165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.032172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.032186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.042145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.042197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.042211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.042218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.042228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.042242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 
00:28:39.328 [2024-05-15 17:13:18.052146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.052217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.052242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.052251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.052258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.052277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.062188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.062246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.062271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.062279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.062286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.062305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.072207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.072259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.072276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.072285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.072291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.072307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 
00:28:39.328 [2024-05-15 17:13:18.082234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.082294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.082319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.082327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.082334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.082354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.092283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.092347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.092364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.092371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.092378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.092394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.102295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.102351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.102366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.102374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.102380] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.102395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 
00:28:39.328 [2024-05-15 17:13:18.112315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.112364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.112380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.112387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.112394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.112408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.122350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.122400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.122415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.122422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.122429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.122443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.132368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.132427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.132442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.132449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.132459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.132474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 
00:28:39.328 [2024-05-15 17:13:18.142356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.142409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.142424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.142431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.142438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.142451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.328 [2024-05-15 17:13:18.152408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.328 [2024-05-15 17:13:18.152460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.328 [2024-05-15 17:13:18.152475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.328 [2024-05-15 17:13:18.152482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.328 [2024-05-15 17:13:18.152488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.328 [2024-05-15 17:13:18.152503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.328 qpair failed and we were unable to recover it. 00:28:39.591 [2024-05-15 17:13:18.162448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.162506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.162520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.162530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.162538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.162566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 
00:28:39.591 [2024-05-15 17:13:18.172481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.172542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.172565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.172574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.172580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.172596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 00:28:39.591 [2024-05-15 17:13:18.182470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.182521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.182536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.182543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.182555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.182569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 00:28:39.591 [2024-05-15 17:13:18.192492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.192541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.192561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.192569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.192575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.192589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 
00:28:39.591 [2024-05-15 17:13:18.202555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.202604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.202619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.202626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.202633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.202647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 00:28:39.591 [2024-05-15 17:13:18.212582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.212637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.212652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.212659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.212666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.212680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 00:28:39.591 [2024-05-15 17:13:18.222591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.222692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.222707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.222718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.591 [2024-05-15 17:13:18.222725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.591 [2024-05-15 17:13:18.222739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.591 qpair failed and we were unable to recover it. 
00:28:39.591 [2024-05-15 17:13:18.232659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.591 [2024-05-15 17:13:18.232709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.591 [2024-05-15 17:13:18.232724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.591 [2024-05-15 17:13:18.232731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.232737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.232751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.242680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.242733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.242748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.242755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.242761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.242776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.252682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.252745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.252759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.252767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.252773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.252787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 
00:28:39.592 [2024-05-15 17:13:18.262728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.262782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.262796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.262804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.262811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.262825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.272726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.272777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.272792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.272799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.272805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.272819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.282754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.282806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.282821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.282828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.282834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.282847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 
00:28:39.592 [2024-05-15 17:13:18.292806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.292861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.292875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.292882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.292888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.292902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.302809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.302859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.302873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.302880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.302887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.302900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.312856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.312907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.312925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.312932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.312939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.312952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 
00:28:39.592 [2024-05-15 17:13:18.322899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.322954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.322969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.322976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.322983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.322997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.332941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.332994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.333009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.333017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.333023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.333037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.342913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.342966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.342981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.342988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.342994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.343009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 
00:28:39.592 [2024-05-15 17:13:18.352960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.353033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.353047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.353054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.353062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.353080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.362975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.363024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.363039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.363047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.363053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.363067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.373025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.373079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.373094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.373101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.373108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.373122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 
00:28:39.592 [2024-05-15 17:13:18.383029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.383080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.383095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.383102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.383108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.383122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.393065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.393114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.393129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.393136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.393142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.393156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.403091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.403145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.403163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.403170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.403177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.403191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 
00:28:39.592 [2024-05-15 17:13:18.413121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.413176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.413192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.413199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.413206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.413220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.592 [2024-05-15 17:13:18.423124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.592 [2024-05-15 17:13:18.423180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.592 [2024-05-15 17:13:18.423195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.592 [2024-05-15 17:13:18.423202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.592 [2024-05-15 17:13:18.423209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.592 [2024-05-15 17:13:18.423223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.592 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.433166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.433232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.433247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.433254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.433261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.433275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 
00:28:39.856 [2024-05-15 17:13:18.443095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.443150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.443165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.443172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.443186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.443201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.453217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.453282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.453307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.453316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.453324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.453342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.463242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.463298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.463323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.463331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.463338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.463357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 
00:28:39.856 [2024-05-15 17:13:18.473266] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.473364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.473390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.473398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.473405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.473424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.483301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.483354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.483371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.483378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.483384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.483400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.493328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.493392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.493408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.493415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.493421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.493436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 
00:28:39.856 [2024-05-15 17:13:18.503357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.503410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.503425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.503432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.503438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.503452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.513287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.513345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.513359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.513366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.513373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.856 [2024-05-15 17:13:18.513387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.856 qpair failed and we were unable to recover it. 00:28:39.856 [2024-05-15 17:13:18.523357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.856 [2024-05-15 17:13:18.523412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.856 [2024-05-15 17:13:18.523427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.856 [2024-05-15 17:13:18.523434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.856 [2024-05-15 17:13:18.523440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.523454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 
00:28:39.857 [2024-05-15 17:13:18.533502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.533575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.533590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.533597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.533609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.533624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.543418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.543467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.543482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.543489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.543495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.543509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.553462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.553524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.553540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.553553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.553560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.553575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 
00:28:39.857 [2024-05-15 17:13:18.563494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.563552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.563568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.563575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.563581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.563595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.573555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.573611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.573626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.573633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.573639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.573653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.583571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.583641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.583656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.583663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.583670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.583685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 
00:28:39.857 [2024-05-15 17:13:18.593595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.593647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.593662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.593670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.593676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.593690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.603621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.603672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.603687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.603694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.603701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.603715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.613598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.613654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.613669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.613676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.613682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.613696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 
00:28:39.857 [2024-05-15 17:13:18.623543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.623598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.623613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.623624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.623631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.623645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.633680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.633731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.633747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.633753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.633760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.633774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 00:28:39.857 [2024-05-15 17:13:18.643752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.643802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.643816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.643824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.643830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.643844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.857 qpair failed and we were unable to recover it. 
00:28:39.857 [2024-05-15 17:13:18.653782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.857 [2024-05-15 17:13:18.653839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.857 [2024-05-15 17:13:18.653854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.857 [2024-05-15 17:13:18.653863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.857 [2024-05-15 17:13:18.653869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.857 [2024-05-15 17:13:18.653883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.858 qpair failed and we were unable to recover it. 00:28:39.858 [2024-05-15 17:13:18.663665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.858 [2024-05-15 17:13:18.663716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.858 [2024-05-15 17:13:18.663730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.858 [2024-05-15 17:13:18.663738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.858 [2024-05-15 17:13:18.663744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.858 [2024-05-15 17:13:18.663758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.858 qpair failed and we were unable to recover it. 00:28:39.858 [2024-05-15 17:13:18.673835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.858 [2024-05-15 17:13:18.673887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.858 [2024-05-15 17:13:18.673902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.858 [2024-05-15 17:13:18.673909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.858 [2024-05-15 17:13:18.673916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.858 [2024-05-15 17:13:18.673930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.858 qpair failed and we were unable to recover it. 
00:28:39.858 [2024-05-15 17:13:18.683853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.858 [2024-05-15 17:13:18.683901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.858 [2024-05-15 17:13:18.683917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.858 [2024-05-15 17:13:18.683925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.858 [2024-05-15 17:13:18.683931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:39.858 [2024-05-15 17:13:18.683945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.858 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.693868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.693963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.693978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.693985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.693992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.694006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.703881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.703945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.703960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.703968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.703974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.703988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 
00:28:40.122 [2024-05-15 17:13:18.713918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.713998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.714016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.714024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.714030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.714045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.723935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.723991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.724005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.724012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.724019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.724033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.733976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.734034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.734049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.734056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.734062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.734076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 
00:28:40.122 [2024-05-15 17:13:18.743977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.744026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.744041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.744048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.744055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.744068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.754036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.754084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.754099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.754106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.754112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.754130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.764111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.764173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.764188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.764195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.764202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.764215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 
00:28:40.122 [2024-05-15 17:13:18.774085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.774139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.774153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.774161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.774167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.774181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.784281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.784380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.784396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.784403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.784409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.784423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 00:28:40.122 [2024-05-15 17:13:18.794136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.794189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.122 [2024-05-15 17:13:18.794204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.122 [2024-05-15 17:13:18.794211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.122 [2024-05-15 17:13:18.794217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.122 [2024-05-15 17:13:18.794231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.122 qpair failed and we were unable to recover it. 
00:28:40.122 [2024-05-15 17:13:18.804165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.122 [2024-05-15 17:13:18.804220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.804238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.804245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.804252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.804265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.814183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.814254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.814269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.814276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.814283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.814297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.824225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.824276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.824291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.824298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.824304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.824319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 
00:28:40.123 [2024-05-15 17:13:18.834111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.834164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.834179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.834186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.834192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.834205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.844276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.844330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.844345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.844352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.844358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.844377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.854173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.854229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.854244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.854251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.854257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.854272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 
00:28:40.123 [2024-05-15 17:13:18.864192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.864246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.864261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.864268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.864274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.864288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.874351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.874401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.874415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.874423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.874429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.874443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.884249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.884302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.884317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.884324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.884331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.884345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 
00:28:40.123 [2024-05-15 17:13:18.894393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.894454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.894469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.894476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.894482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.894496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.904427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.904481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.904496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.904503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.904509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.904523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.914454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.914508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.914523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.914530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.914537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.914555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 
00:28:40.123 [2024-05-15 17:13:18.924352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.924404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.924419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.924426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.924433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.924447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.934512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.934574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.934588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.934596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.934606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.934621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 00:28:40.123 [2024-05-15 17:13:18.944525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.123 [2024-05-15 17:13:18.944582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.123 [2024-05-15 17:13:18.944597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.123 [2024-05-15 17:13:18.944605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.123 [2024-05-15 17:13:18.944611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.123 [2024-05-15 17:13:18.944625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.123 qpair failed and we were unable to recover it. 
00:28:40.386 [2024-05-15 17:13:18.954433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.386 [2024-05-15 17:13:18.954484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.386 [2024-05-15 17:13:18.954499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.386 [2024-05-15 17:13:18.954506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.386 [2024-05-15 17:13:18.954513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.386 [2024-05-15 17:13:18.954527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.386 qpair failed and we were unable to recover it. 00:28:40.386 [2024-05-15 17:13:18.964598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.386 [2024-05-15 17:13:18.964648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.386 [2024-05-15 17:13:18.964663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.386 [2024-05-15 17:13:18.964670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.386 [2024-05-15 17:13:18.964676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.386 [2024-05-15 17:13:18.964690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.386 qpair failed and we were unable to recover it. 00:28:40.386 [2024-05-15 17:13:18.974509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.386 [2024-05-15 17:13:18.974563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.386 [2024-05-15 17:13:18.974577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.386 [2024-05-15 17:13:18.974584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.386 [2024-05-15 17:13:18.974591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.386 [2024-05-15 17:13:18.974605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.386 qpair failed and we were unable to recover it. 
00:28:40.386 [2024-05-15 17:13:18.984556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.386 [2024-05-15 17:13:18.984639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.386 [2024-05-15 17:13:18.984654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.386 [2024-05-15 17:13:18.984662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.386 [2024-05-15 17:13:18.984668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.386 [2024-05-15 17:13:18.984682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.386 qpair failed and we were unable to recover it. 00:28:40.386 [2024-05-15 17:13:18.994681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.386 [2024-05-15 17:13:18.994736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.386 [2024-05-15 17:13:18.994750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.386 [2024-05-15 17:13:18.994758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.386 [2024-05-15 17:13:18.994764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.386 [2024-05-15 17:13:18.994778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.386 qpair failed and we were unable to recover it. 00:28:40.386 [2024-05-15 17:13:19.004671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.004724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.004739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.004746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.004753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.004766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 
00:28:40.387 [2024-05-15 17:13:19.014713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.014768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.014782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.014789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.014796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.014810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.024755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.024805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.024820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.024831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.024837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.024851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.034795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.034848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.034862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.034869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.034875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.034889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 
00:28:40.387 [2024-05-15 17:13:19.044808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.044879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.044894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.044901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.044907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.044922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.054834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.054942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.054959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.054970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.054976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.054991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.064857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.064906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.064922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.064929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.064935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.064949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 
00:28:40.387 [2024-05-15 17:13:19.074846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.074902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.074917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.074924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.074930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.074944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.084802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.084855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.084870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.084878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.084884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.084899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.094907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.094998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.095014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.095021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.095028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.095042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 
00:28:40.387 [2024-05-15 17:13:19.104996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.105081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.105096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.105103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.105110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.105124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.114991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.115045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.115060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.115070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.115077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.115091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.125035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.125086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.125101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.125108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.125115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.125128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 
00:28:40.387 [2024-05-15 17:13:19.135076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.387 [2024-05-15 17:13:19.135130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.387 [2024-05-15 17:13:19.135145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.387 [2024-05-15 17:13:19.135153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.387 [2024-05-15 17:13:19.135159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.387 [2024-05-15 17:13:19.135173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.387 qpair failed and we were unable to recover it. 00:28:40.387 [2024-05-15 17:13:19.145079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.388 [2024-05-15 17:13:19.145137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.388 [2024-05-15 17:13:19.145153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.388 [2024-05-15 17:13:19.145160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.388 [2024-05-15 17:13:19.145166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f720c000b90 00:28:40.388 [2024-05-15 17:13:19.145180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.388 qpair failed and we were unable to recover it. 
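(Context for the block above, not part of the captured log: the repeated CONNECT failures against 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 are the host-side view of fabric connect attempts the disconnect test keeps retrying while the target rejects the I/O qpair; sc 130 is 0x82, which appears to correspond to the Fabrics Connect "invalid parameters" status and matches the target's "Unknown controller ID" complaint. A minimal manual sketch of the same kind of NVMe/TCP connect attempt, assuming nvme-cli is installed and using the address, service ID, and NQN shown in the log; the test itself drives the connect through the SPDK host stack, not nvme-cli:)

# hedged illustration only -- not part of the recorded test flow
# attempt an NVMe/TCP fabric connect to the subsystem the log shows failing
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# list whatever did attach, then tear the connection down again
nvme list-subsys
nvme disconnect -n nqn.2016-06.io.spdk:cnode1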
00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 [2024-05-15 17:13:19.146057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read 
completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Write completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 [2024-05-15 17:13:19.146297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.388 [2024-05-15 17:13:19.155114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.388 [2024-05-15 17:13:19.155162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.388 [2024-05-15 17:13:19.155176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.388 [2024-05-15 17:13:19.155182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.388 [2024-05-15 17:13:19.155187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7214000b90 00:28:40.388 [2024-05-15 17:13:19.155202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.388 qpair failed and we were unable to recover it. 
00:28:40.388 [2024-05-15 17:13:19.165140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.388 [2024-05-15 17:13:19.165185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.388 [2024-05-15 17:13:19.165197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.388 [2024-05-15 17:13:19.165202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.388 [2024-05-15 17:13:19.165207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7214000b90 00:28:40.388 [2024-05-15 17:13:19.165217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.388 qpair failed and we were unable to recover it. 00:28:40.388 [2024-05-15 17:13:19.175237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.388 [2024-05-15 17:13:19.175379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.388 [2024-05-15 17:13:19.175443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.388 [2024-05-15 17:13:19.175467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.388 [2024-05-15 17:13:19.175488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f721c000b90 00:28:40.388 [2024-05-15 17:13:19.175541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.388 qpair failed and we were unable to recover it. 00:28:40.388 [2024-05-15 17:13:19.185241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.388 [2024-05-15 17:13:19.185333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.388 [2024-05-15 17:13:19.185386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.388 [2024-05-15 17:13:19.185406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.388 [2024-05-15 17:13:19.185422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f721c000b90 00:28:40.388 [2024-05-15 17:13:19.185464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.388 qpair failed and we were unable to recover it. 
00:28:40.388 [2024-05-15 17:13:19.185846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a59ec0 is same with the state(5) to be set 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.388 starting I/O failed 00:28:40.388 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Write completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Write completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Write completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Write completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Write completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 Read completed with error (sct=0, sc=8) 00:28:40.389 starting I/O failed 00:28:40.389 [2024-05-15 17:13:19.186313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.389 [2024-05-15 17:13:19.195231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.389 [2024-05-15 17:13:19.195287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.389 [2024-05-15 17:13:19.195308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command 
completed with error: sct 1, sc 130 00:28:40.389 [2024-05-15 17:13:19.195317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.389 [2024-05-15 17:13:19.195323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a63350 00:28:40.389 [2024-05-15 17:13:19.195339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.389 qpair failed and we were unable to recover it. 00:28:40.389 [2024-05-15 17:13:19.205236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.389 [2024-05-15 17:13:19.205331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.389 [2024-05-15 17:13:19.205356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.389 [2024-05-15 17:13:19.205365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.389 [2024-05-15 17:13:19.205372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a63350 00:28:40.389 [2024-05-15 17:13:19.205391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.389 qpair failed and we were unable to recover it. 00:28:40.389 [2024-05-15 17:13:19.205865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a59ec0 (9): Bad file descriptor 00:28:40.389 Initializing NVMe Controllers 00:28:40.389 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:40.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:40.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:40.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:40.389 Initialization complete. Launching workers. 
00:28:40.389 Starting thread on core 1 00:28:40.389 Starting thread on core 2 00:28:40.389 Starting thread on core 3 00:28:40.389 Starting thread on core 0 00:28:40.389 17:13:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:40.389 00:28:40.389 real 0m11.295s 00:28:40.389 user 0m21.570s 00:28:40.389 sys 0m3.769s 00:28:40.389 17:13:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:40.389 17:13:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:40.389 ************************************ 00:28:40.389 END TEST nvmf_target_disconnect_tc2 00:28:40.389 ************************************ 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.650 rmmod nvme_tcp 00:28:40.650 rmmod nvme_fabrics 00:28:40.650 rmmod nvme_keyring 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1646506 ']' 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1646506 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1646506 ']' 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1646506 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1646506 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1646506' 00:28:40.650 killing process with pid 1646506 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1646506 00:28:40.650 [2024-05-15 17:13:19.372901] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:28:40.650 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1646506 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.911 17:13:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.829 17:13:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.829 00:28:42.829 real 0m21.104s 00:28:42.829 user 0m48.738s 00:28:42.829 sys 0m9.470s 00:28:42.829 17:13:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:42.829 17:13:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:42.829 ************************************ 00:28:42.829 END TEST nvmf_target_disconnect 00:28:42.829 ************************************ 00:28:42.829 17:13:21 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:28:42.829 17:13:21 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.829 17:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.829 17:13:21 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:42.829 00:28:42.829 real 22m22.911s 00:28:42.829 user 47m43.598s 00:28:42.829 sys 6m56.118s 00:28:42.829 17:13:21 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:42.829 17:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.829 ************************************ 00:28:42.829 END TEST nvmf_tcp 00:28:42.829 ************************************ 00:28:43.091 17:13:21 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:28:43.091 17:13:21 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:43.091 17:13:21 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:43.091 17:13:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:43.091 17:13:21 -- common/autotest_common.sh@10 -- # set +x 00:28:43.091 ************************************ 00:28:43.091 START TEST spdkcli_nvmf_tcp 00:28:43.091 ************************************ 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:43.091 * Looking for test storage... 
00:28:43.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.091 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1648300 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1648300 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1648300 ']' 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:43.092 17:13:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.092 [2024-05-15 17:13:21.886646] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:28:43.092 [2024-05-15 17:13:21.886723] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648300 ] 00:28:43.092 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.353 [2024-05-15 17:13:21.950613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:43.353 [2024-05-15 17:13:22.026843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.353 [2024-05-15 17:13:22.026933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.926 17:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:43.926 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:43.926 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:43.926 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:43.926 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:43.926 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:43.926 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:43.926 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:43.926 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:43.926 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:43.926 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:43.926 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:43.927 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:43.927 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:43.927 ' 00:28:46.472 [2024-05-15 17:13:25.046362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.414 [2024-05-15 17:13:26.209799] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:47.414 [2024-05-15 17:13:26.210136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:49.961 [2024-05-15 17:13:28.344437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:51.876 [2024-05-15 17:13:30.182127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:52.822 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:52.822 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:52.822 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:52.822 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:52.822 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:52.822 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:52.822 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:52.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:52.822 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:52.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:52.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:52.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:52.823 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:52.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:52.823 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:28:53.084 17:13:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
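For reference only: the spdkcli_job.py helper above takes each configuration command paired with a string that must appear in its output (the 'command' 'match' True triplets) and prints one "Executing command" line per step. A minimal hand-driven sketch of a few of the same steps, assuming the nvmf_tgt started earlier is still listening on /var/tmp/spdk.sock and that scripts/spdkcli.py accepts the command words as arguments the same way the 'll /nvmf' call below does:

  # Sketch: replay some of the configuration steps above directly with spdkcli.
  ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  ./scripts/spdkcli.py ll /nvmf   # inspect the resulting tree, as check_match does next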
/nvmf 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.345 17:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:53.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:53.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:53.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:53.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:53.345 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:53.345 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:53.345 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:53.345 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:53.345 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:53.345 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:53.345 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:53.345 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:53.345 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:53.345 ' 00:28:58.637 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:58.637 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:58.637 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:58.637 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:58.637 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:58.637 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:58.637 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:58.637 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:58.637 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:58.637 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:58.637 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:58.637 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:58.637 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:58.637 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1648300 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1648300 ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1648300 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1648300 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1648300' 00:28:58.637 killing process with pid 1648300 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1648300 00:28:58.637 [2024-05-15 17:13:37.123246] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1648300 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1648300 ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1648300 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1648300 ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1648300 00:28:58.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1648300) - No such process 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1648300 is not found' 00:28:58.637 Process with pid 1648300 is not found 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:58.637 00:28:58.637 real 0m15.564s 00:28:58.637 user 0m32.029s 00:28:58.637 sys 0m0.699s 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:58.637 17:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
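The teardown above walks the configuration tree bottom-up: namespaces, hosts and listen addresses are removed from a subsystem before the subsystem itself is deleted, and the malloc bdevs are deleted only after every subsystem referencing them is gone. A condensed sketch of that ordering, under the same assumption that the target is still reachable on /var/tmp/spdk.sock:

  # Children first, then parents, then the backing bdevs.
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
  ./scripts/spdkcli.py /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3
  ./scripts/spdkcli.py /nvmf/subsystem delete_all
  ./scripts/spdkcli.py /bdevs/malloc delete Malloc1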
# set +x 00:28:58.637 ************************************ 00:28:58.637 END TEST spdkcli_nvmf_tcp 00:28:58.637 ************************************ 00:28:58.637 17:13:37 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:58.637 17:13:37 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:58.637 17:13:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:58.637 17:13:37 -- common/autotest_common.sh@10 -- # set +x 00:28:58.637 ************************************ 00:28:58.637 START TEST nvmf_identify_passthru 00:28:58.637 ************************************ 00:28:58.637 17:13:37 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:58.637 * Looking for test storage... 00:28:58.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.637 17:13:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:58.637 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.638 17:13:37 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.638 17:13:37 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.638 17:13:37 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.638 17:13:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.638 17:13:37 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.638 17:13:37 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.638 17:13:37 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:58.638 17:13:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.638 17:13:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.638 17:13:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:58.638 17:13:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.638 17:13:37 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.638 17:13:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.783 17:13:44 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.783 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.784 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.784 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.784 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
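The detection loop above builds the candidate PCI list from known Intel E810/X722 and Mellanox device IDs, then resolves each function to its kernel netdev through sysfs. A standalone sketch of that last step, assuming the same two E810 functions found on this host:

  # For each detected PCI function, list the netdev the kernel bound to it,
  # e.g. 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1 as reported above.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$netdev")"
      done
  done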
00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:06.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:29:06.784 00:29:06.784 --- 10.0.0.2 ping statistics --- 00:29:06.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.784 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:29:06.784 00:29:06.784 --- 10.0.0.1 ping statistics --- 00:29:06.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.784 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:06.784 17:13:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:06.784 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:06.784 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:06.784 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:06.785 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:06.785 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:06.785 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:06.785 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:06.785 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:29:06.785 17:13:44 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:29:06.785 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:06.785 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:06.785 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:06.785 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:06.785 17:13:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:06.785 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.785 
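The nvmf_tcp_init sequence above turns the two back-to-back E810 ports into a small test topology: the target port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and a ping in each direction confirms connectivity. A condensed sketch of those steps, assuming the interface names discovered earlier:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator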
17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:06.785 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1655237 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:06.785 17:13:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1655237 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1655237 ']' 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:06.785 17:13:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:06.785 [2024-05-15 17:13:45.609994] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:29:06.785 [2024-05-15 17:13:45.610049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.046 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.046 [2024-05-15 17:13:45.677745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.046 [2024-05-15 17:13:45.747556] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.046 [2024-05-15 17:13:45.747593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:07.046 [2024-05-15 17:13:45.747600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.046 [2024-05-15 17:13:45.747607] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.046 [2024-05-15 17:13:45.747612] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.046 [2024-05-15 17:13:45.747674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.046 [2024-05-15 17:13:45.747805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.046 [2024-05-15 17:13:45.747960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.046 [2024-05-15 17:13:45.747961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:07.619 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:07.619 INFO: Log level set to 20 00:29:07.619 INFO: Requests: 00:29:07.619 { 00:29:07.619 "jsonrpc": "2.0", 00:29:07.619 "method": "nvmf_set_config", 00:29:07.619 "id": 1, 00:29:07.619 "params": { 00:29:07.619 "admin_cmd_passthru": { 00:29:07.619 "identify_ctrlr": true 00:29:07.619 } 00:29:07.619 } 00:29:07.619 } 00:29:07.619 00:29:07.619 INFO: response: 00:29:07.619 { 00:29:07.619 "jsonrpc": "2.0", 00:29:07.619 "id": 1, 00:29:07.619 "result": true 00:29:07.619 } 00:29:07.619 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.619 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.619 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:07.619 INFO: Setting log level to 20 00:29:07.619 INFO: Setting log level to 20 00:29:07.619 INFO: Log level set to 20 00:29:07.619 INFO: Log level set to 20 00:29:07.619 INFO: Requests: 00:29:07.619 { 00:29:07.619 "jsonrpc": "2.0", 00:29:07.619 "method": "framework_start_init", 00:29:07.619 "id": 1 00:29:07.619 } 00:29:07.619 00:29:07.619 INFO: Requests: 00:29:07.619 { 00:29:07.619 "jsonrpc": "2.0", 00:29:07.619 "method": "framework_start_init", 00:29:07.619 "id": 1 00:29:07.619 } 00:29:07.619 00:29:07.881 [2024-05-15 17:13:46.466286] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:07.881 INFO: response: 00:29:07.881 { 00:29:07.881 "jsonrpc": "2.0", 00:29:07.881 "id": 1, 00:29:07.881 "result": true 00:29:07.881 } 00:29:07.881 00:29:07.881 INFO: response: 00:29:07.881 { 00:29:07.881 "jsonrpc": "2.0", 00:29:07.881 "id": 1, 00:29:07.881 "result": true 00:29:07.881 } 00:29:07.881 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.881 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.881 17:13:46 nvmf_identify_passthru -- 
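Because the target was started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, the passthru identify handler has to be switched on over JSON-RPC before the framework is allowed to initialize; the nvmf_set_config and framework_start_init requests and responses above show that handshake. A rough sketch of the same sequence, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

  # Start the target paused, enable identify passthru, then let init proceed.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # (wait until /var/tmp/spdk.sock exists before issuing RPCs)
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # sends the nvmf_set_config request shown above
  ./scripts/rpc.py framework_start_init                        # subsystems initialize with the handler enabled
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport init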
common/autotest_common.sh@10 -- # set +x 00:29:07.881 INFO: Setting log level to 40 00:29:07.881 INFO: Setting log level to 40 00:29:07.881 INFO: Setting log level to 40 00:29:07.881 [2024-05-15 17:13:46.479540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.881 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:07.881 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.881 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:08.143 Nvme0n1 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:08.143 [2024-05-15 17:13:46.860668] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:08.143 [2024-05-15 17:13:46.860919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:08.143 [ 00:29:08.143 { 00:29:08.143 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:08.143 "subtype": "Discovery", 00:29:08.143 "listen_addresses": [], 00:29:08.143 "allow_any_host": true, 00:29:08.143 "hosts": [] 00:29:08.143 }, 00:29:08.143 { 00:29:08.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.143 "subtype": "NVMe", 00:29:08.143 "listen_addresses": [ 00:29:08.143 { 00:29:08.143 "trtype": "TCP", 
00:29:08.143 "adrfam": "IPv4", 00:29:08.143 "traddr": "10.0.0.2", 00:29:08.143 "trsvcid": "4420" 00:29:08.143 } 00:29:08.143 ], 00:29:08.143 "allow_any_host": true, 00:29:08.143 "hosts": [], 00:29:08.143 "serial_number": "SPDK00000000000001", 00:29:08.143 "model_number": "SPDK bdev Controller", 00:29:08.143 "max_namespaces": 1, 00:29:08.143 "min_cntlid": 1, 00:29:08.143 "max_cntlid": 65519, 00:29:08.143 "namespaces": [ 00:29:08.143 { 00:29:08.143 "nsid": 1, 00:29:08.143 "bdev_name": "Nvme0n1", 00:29:08.143 "name": "Nvme0n1", 00:29:08.143 "nguid": "3634473052605487002538450000003C", 00:29:08.143 "uuid": "36344730-5260-5487-0025-38450000003c" 00:29:08.143 } 00:29:08.143 ] 00:29:08.143 } 00:29:08.143 ] 00:29:08.143 17:13:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:08.143 17:13:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:08.143 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.404 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:29:08.405 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:08.405 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:08.405 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:08.405 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.666 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:08.666 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:29:08.666 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:08.666 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:08.666 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.666 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.667 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:08.667 17:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.667 rmmod nvme_tcp 00:29:08.667 rmmod nvme_fabrics 00:29:08.667 rmmod 
nvme_keyring 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1655237 ']' 00:29:08.667 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1655237 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1655237 ']' 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1655237 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1655237 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1655237' 00:29:08.667 killing process with pid 1655237 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1655237 00:29:08.667 [2024-05-15 17:13:47.432858] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:08.667 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1655237 00:29:08.928 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:08.928 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:08.928 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:08.928 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.928 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:08.928 17:13:47 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.928 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:08.928 17:13:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.474 17:13:49 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.474 00:29:11.474 real 0m12.461s 00:29:11.474 user 0m10.055s 00:29:11.474 sys 0m5.914s 00:29:11.474 17:13:49 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:11.474 17:13:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:11.474 ************************************ 00:29:11.474 END TEST nvmf_identify_passthru 00:29:11.474 ************************************ 00:29:11.474 17:13:49 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:11.474 17:13:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:11.474 17:13:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:11.475 17:13:49 -- common/autotest_common.sh@10 -- # set +x 00:29:11.475 ************************************ 00:29:11.475 START TEST nvmf_dif 
00:29:11.475 ************************************ 00:29:11.475 17:13:49 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:11.475 * Looking for test storage... 00:29:11.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:11.475 17:13:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.475 17:13:49 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.475 17:13:49 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.475 17:13:49 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.475 17:13:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.475 17:13:49 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.475 17:13:49 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.475 17:13:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:11.475 17:13:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.475 17:13:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:11.475 17:13:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:11.475 17:13:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:11.475 17:13:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:11.475 17:13:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.475 17:13:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:11.475 17:13:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:11.475 17:13:49 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.475 17:13:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:18.188 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:18.188 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.188 17:13:56 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:18.188 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:18.188 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.188 17:13:56 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:29:18.189 00:29:18.189 --- 10.0.0.2 ping statistics --- 00:29:18.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.189 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:29:18.189 00:29:18.189 --- 10.0.0.1 ping statistics --- 00:29:18.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.189 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:18.189 17:13:56 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:21.502 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:21.502 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:21.502 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:22.075 17:14:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:22.075 17:14:00 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1661056 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1661056 00:29:22.075 17:14:00 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1661056 ']' 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:22.075 17:14:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:22.075 [2024-05-15 17:14:00.721876] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:29:22.075 [2024-05-15 17:14:00.721942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.075 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.075 [2024-05-15 17:14:00.794724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.075 [2024-05-15 17:14:00.869909] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.075 [2024-05-15 17:14:00.869946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.075 [2024-05-15 17:14:00.869953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.075 [2024-05-15 17:14:00.869960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.075 [2024-05-15 17:14:00.869966] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
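Annotation: by this point nvmftestinit has split the two E810 ports into a small loopback topology and launched nvmf_tgt inside the target-side network namespace, so a single host can act as both target and initiator over real NICs. Collected in one place from the scattered trace entries above (interface names and addresses are specific to this machine), the setup amounts to roughly the sketch below.

# Sketch of the namespace topology built by nvmf_tcp_init in this run:
# cvl_0_0 becomes the target port (10.0.0.2 inside the namespace) and
# cvl_0_1 stays in the default namespace as the initiator port (10.0.0.1).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as in the trace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target application is then started inside the namespace
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &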
00:29:22.076 [2024-05-15 17:14:00.869992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:23.020 17:14:01 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.020 17:14:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.020 17:14:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:23.020 17:14:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.020 [2024-05-15 17:14:01.553222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.020 17:14:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:23.020 17:14:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.020 ************************************ 00:29:23.020 START TEST fio_dif_1_default 00:29:23.020 ************************************ 00:29:23.020 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:23.020 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:23.020 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:23.020 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 bdev_null0 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.021 [2024-05-15 17:14:01.597282] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:23.021 [2024-05-15 17:14:01.597502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.021 { 00:29:23.021 "params": { 00:29:23.021 "name": "Nvme$subsystem", 00:29:23.021 "trtype": "$TEST_TRANSPORT", 00:29:23.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.021 "adrfam": "ipv4", 00:29:23.021 "trsvcid": "$NVMF_PORT", 00:29:23.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.021 "hdgst": ${hdgst:-false}, 00:29:23.021 "ddgst": ${ddgst:-false} 00:29:23.021 }, 00:29:23.021 "method": "bdev_nvme_attach_controller" 00:29:23.021 } 00:29:23.021 EOF 
00:29:23.021 )") 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.021 "params": { 00:29:23.021 "name": "Nvme0", 00:29:23.021 "trtype": "tcp", 00:29:23.021 "traddr": "10.0.0.2", 00:29:23.021 "adrfam": "ipv4", 00:29:23.021 "trsvcid": "4420", 00:29:23.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.021 "hdgst": false, 00:29:23.021 "ddgst": false 00:29:23.021 }, 00:29:23.021 "method": "bdev_nvme_attach_controller" 00:29:23.021 }' 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:23.021 17:14:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.283 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:23.283 fio-3.35 00:29:23.283 Starting 1 thread 00:29:23.283 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.514 00:29:35.514 filename0: (groupid=0, jobs=1): err= 0: pid=1661586: Wed May 15 17:14:12 2024 00:29:35.514 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10037msec) 00:29:35.514 slat (nsec): min=5669, max=37207, avg=6416.02, stdev=1454.17 00:29:35.514 clat (usec): min=671, max=41955, avg=21068.74, stdev=20108.09 00:29:35.514 lat (usec): min=677, max=41992, avg=21075.15, stdev=20108.08 00:29:35.514 clat percentiles (usec): 00:29:35.514 | 1.00th=[ 717], 5.00th=[ 840], 10.00th=[ 865], 20.00th=[ 889], 00:29:35.514 | 30.00th=[ 898], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:29:35.514 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:35.514 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 
99.95th=[42206], 00:29:35.514 | 99.99th=[42206] 00:29:35.514 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=760.00, stdev=25.16, samples=20 00:29:35.514 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:29:35.514 lat (usec) : 750=1.63%, 1000=48.16% 00:29:35.514 lat (msec) : 50=50.21% 00:29:35.514 cpu : usr=95.36%, sys=4.45%, ctx=11, majf=0, minf=215 00:29:35.514 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.514 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.514 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:35.514 00:29:35.514 Run status group 0 (all jobs): 00:29:35.514 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7616KiB (7799kB), run=10037-10037msec 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 00:29:35.514 real 0m11.067s 00:29:35.514 user 0m24.769s 00:29:35.514 sys 0m0.741s 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 ************************************ 00:29:35.514 END TEST fio_dif_1_default 00:29:35.514 ************************************ 00:29:35.514 17:14:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:35.514 17:14:12 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:35.514 17:14:12 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 ************************************ 00:29:35.514 START TEST fio_dif_1_multi_subsystems 00:29:35.514 ************************************ 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:35.514 17:14:12 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 bdev_null0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 [2024-05-15 17:14:12.702525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 bdev_null1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.514 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.515 { 00:29:35.515 "params": { 
00:29:35.515 "name": "Nvme$subsystem", 00:29:35.515 "trtype": "$TEST_TRANSPORT", 00:29:35.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.515 "adrfam": "ipv4", 00:29:35.515 "trsvcid": "$NVMF_PORT", 00:29:35.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.515 "hdgst": ${hdgst:-false}, 00:29:35.515 "ddgst": ${ddgst:-false} 00:29:35.515 }, 00:29:35.515 "method": "bdev_nvme_attach_controller" 00:29:35.515 } 00:29:35.515 EOF 00:29:35.515 )") 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.515 { 00:29:35.515 "params": { 00:29:35.515 "name": "Nvme$subsystem", 00:29:35.515 "trtype": "$TEST_TRANSPORT", 00:29:35.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.515 "adrfam": "ipv4", 00:29:35.515 "trsvcid": "$NVMF_PORT", 00:29:35.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.515 "hdgst": ${hdgst:-false}, 00:29:35.515 "ddgst": ${ddgst:-false} 00:29:35.515 }, 00:29:35.515 "method": "bdev_nvme_attach_controller" 00:29:35.515 } 00:29:35.515 EOF 00:29:35.515 )") 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.515 "params": { 00:29:35.515 "name": "Nvme0", 00:29:35.515 "trtype": "tcp", 00:29:35.515 "traddr": "10.0.0.2", 00:29:35.515 "adrfam": "ipv4", 00:29:35.515 "trsvcid": "4420", 00:29:35.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.515 "hdgst": false, 00:29:35.515 "ddgst": false 00:29:35.515 }, 00:29:35.515 "method": "bdev_nvme_attach_controller" 00:29:35.515 },{ 00:29:35.515 "params": { 00:29:35.515 "name": "Nvme1", 00:29:35.515 "trtype": "tcp", 00:29:35.515 "traddr": "10.0.0.2", 00:29:35.515 "adrfam": "ipv4", 00:29:35.515 "trsvcid": "4420", 00:29:35.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.515 "hdgst": false, 00:29:35.515 "ddgst": false 00:29:35.515 }, 00:29:35.515 "method": "bdev_nvme_attach_controller" 00:29:35.515 }' 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:35.515 17:14:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.515 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:35.515 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:35.515 fio-3.35 00:29:35.515 Starting 2 threads 00:29:35.515 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.511 00:29:45.511 filename0: (groupid=0, jobs=1): err= 0: pid=1663816: Wed May 15 17:14:23 2024 00:29:45.511 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:29:45.511 slat (nsec): min=5667, max=32065, avg=5959.44, stdev=1233.79 00:29:45.511 clat (usec): min=40949, max=42395, avg=41977.62, stdev=93.92 00:29:45.511 lat (usec): min=40955, max=42424, avg=41983.58, stdev=94.20 00:29:45.511 clat percentiles (usec): 00:29:45.511 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:29:45.511 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:29:45.511 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:45.511 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:45.511 | 99.99th=[42206] 
00:29:45.511 bw ( KiB/s): min= 352, max= 384, per=33.91%, avg=380.80, stdev= 9.85, samples=20 00:29:45.511 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:29:45.511 lat (msec) : 50=100.00% 00:29:45.511 cpu : usr=97.25%, sys=2.55%, ctx=25, majf=0, minf=114 00:29:45.511 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.511 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.511 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:45.511 filename1: (groupid=0, jobs=1): err= 0: pid=1663817: Wed May 15 17:14:23 2024 00:29:45.511 read: IOPS=185, BW=742KiB/s (760kB/s)(7424KiB/10008msec) 00:29:45.511 slat (nsec): min=5672, max=53946, avg=6550.02, stdev=2019.15 00:29:45.511 clat (usec): min=763, max=42566, avg=21550.24, stdev=20471.65 00:29:45.511 lat (usec): min=769, max=42594, avg=21556.79, stdev=20471.67 00:29:45.511 clat percentiles (usec): 00:29:45.511 | 1.00th=[ 914], 5.00th=[ 947], 10.00th=[ 963], 20.00th=[ 988], 00:29:45.511 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[41157], 60.00th=[41681], 00:29:45.511 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:45.511 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:45.511 | 99.99th=[42730] 00:29:45.511 bw ( KiB/s): min= 672, max= 768, per=66.03%, avg=740.80, stdev=34.86, samples=20 00:29:45.511 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:29:45.511 lat (usec) : 1000=31.63% 00:29:45.511 lat (msec) : 2=18.16%, 50=50.22% 00:29:45.511 cpu : usr=96.74%, sys=2.95%, ctx=81, majf=0, minf=123 00:29:45.511 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.511 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.511 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:45.511 00:29:45.511 Run status group 0 (all jobs): 00:29:45.511 READ: bw=1121KiB/s (1148kB/s), 381KiB/s-742KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10008-10037msec 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
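Annotation: teardown is the mirror image of setup. The destroy_subsystems calls traced around this point walk the same indices, removing each subsystem before deleting its backing null bdev, which leaves the target clean for the next parameter combination. A minimal equivalent of those rpc_cmd pairs (rpc.py path assumed, commands copied from the trace):

# Sketch: per-index teardown used between dif.sh sub-tests.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed; trace uses rpc_cmd
for sub in 0 1; do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    "$rpc" bdev_null_delete "bdev_null$sub"
done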
00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 00:29:45.511 real 0m11.361s 00:29:45.511 user 0m36.605s 00:29:45.511 sys 0m0.849s 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 ************************************ 00:29:45.511 END TEST fio_dif_1_multi_subsystems 00:29:45.511 ************************************ 00:29:45.511 17:14:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:45.511 17:14:24 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:45.511 17:14:24 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 ************************************ 00:29:45.511 START TEST fio_dif_rand_params 00:29:45.511 ************************************ 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 bdev_null0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 [2024-05-15 17:14:24.105143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 
0 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:45.511 { 00:29:45.511 "params": { 00:29:45.511 "name": "Nvme$subsystem", 00:29:45.511 "trtype": "$TEST_TRANSPORT", 00:29:45.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.511 "adrfam": "ipv4", 00:29:45.511 "trsvcid": "$NVMF_PORT", 00:29:45.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.511 "hdgst": ${hdgst:-false}, 00:29:45.511 "ddgst": ${ddgst:-false} 00:29:45.511 }, 00:29:45.511 "method": "bdev_nvme_attach_controller" 00:29:45.511 } 00:29:45.511 EOF 00:29:45.511 )") 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
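Annotation: I/O in these tests is generated with SPDK's fio bdev plugin rather than a kernel initiator: fio_bdev LD_PRELOADs build/fio/spdk_bdev, selects ioengine=spdk_bdev, and reads an SPDK JSON config from /dev/fd/62 whose entries are the bdev_nvme_attach_controller blocks printed in the trace. The job file itself is fed in on /dev/fd/61 by gen_fio_conf and is not reproduced in the log, so the stanza below is only a plausible minimal equivalent of this 128 KiB randread / 3-job / iodepth-3 / 5-second case; the Nvme0n1 bdev name, the time_based setting, and the enclosing "subsystems"/"bdev" wrapper in the JSON are assumptions, everything else is taken from the trace.

# Hypothetical stand-alone reproduction of the fio invocation in this trace.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > nvmf_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
cat > dif_rand.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD="$rootdir/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=./nvmf_target.json dif_rand.fio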
00:29:45.511 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:45.512 "params": { 00:29:45.512 "name": "Nvme0", 00:29:45.512 "trtype": "tcp", 00:29:45.512 "traddr": "10.0.0.2", 00:29:45.512 "adrfam": "ipv4", 00:29:45.512 "trsvcid": "4420", 00:29:45.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:45.512 "hdgst": false, 00:29:45.512 "ddgst": false 00:29:45.512 }, 00:29:45.512 "method": "bdev_nvme_attach_controller" 00:29:45.512 }' 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:45.512 17:14:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:45.771 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:45.771 ... 
00:29:45.771 fio-3.35 00:29:45.771 Starting 3 threads 00:29:45.771 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.355 00:29:52.355 filename0: (groupid=0, jobs=1): err= 0: pid=1666226: Wed May 15 17:14:30 2024 00:29:52.355 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(120MiB/5007msec) 00:29:52.355 slat (nsec): min=5876, max=35657, avg=8885.02, stdev=1192.71 00:29:52.355 clat (usec): min=5841, max=90874, avg=15583.75, stdev=13097.22 00:29:52.355 lat (usec): min=5850, max=90883, avg=15592.63, stdev=13097.34 00:29:52.355 clat percentiles (usec): 00:29:52.355 | 1.00th=[ 6456], 5.00th=[ 7570], 10.00th=[ 8225], 20.00th=[ 9372], 00:29:52.355 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11469], 60.00th=[12387], 00:29:52.355 | 70.00th=[13304], 80.00th=[14484], 90.00th=[47449], 95.00th=[49546], 00:29:52.355 | 99.00th=[53740], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:29:52.355 | 99.99th=[90702] 00:29:52.355 bw ( KiB/s): min= 9472, max=34560, per=32.47%, avg=24576.00, stdev=7852.52, samples=10 00:29:52.355 iops : min= 74, max= 270, avg=192.00, stdev=61.35, samples=10 00:29:52.355 lat (msec) : 10=26.79%, 20=62.62%, 50=6.44%, 100=4.15% 00:29:52.355 cpu : usr=95.92%, sys=3.82%, ctx=11, majf=0, minf=62 00:29:52.355 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.355 issued rwts: total=963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.355 filename0: (groupid=0, jobs=1): err= 0: pid=1666227: Wed May 15 17:14:30 2024 00:29:52.355 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(127MiB/5003msec) 00:29:52.355 slat (nsec): min=5685, max=36622, avg=6328.18, stdev=1475.87 00:29:52.355 clat (usec): min=5417, max=56102, avg=14747.13, stdev=12691.12 00:29:52.355 lat (usec): min=5423, max=56109, avg=14753.46, stdev=12691.17 00:29:52.355 clat percentiles (usec): 00:29:52.355 | 1.00th=[ 6128], 5.00th=[ 6783], 10.00th=[ 7570], 20.00th=[ 8455], 00:29:52.355 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11207], 00:29:52.355 | 70.00th=[11994], 80.00th=[13173], 90.00th=[47449], 95.00th=[49546], 00:29:52.355 | 99.00th=[52691], 99.50th=[53740], 99.90th=[55313], 99.95th=[56361], 00:29:52.355 | 99.99th=[56361] 00:29:52.355 bw ( KiB/s): min=13824, max=35328, per=33.37%, avg=25258.67, stdev=6036.39, samples=9 00:29:52.355 iops : min= 108, max= 276, avg=197.33, stdev=47.16, samples=9 00:29:52.355 lat (msec) : 10=41.99%, 20=46.51%, 50=7.37%, 100=4.13% 00:29:52.355 cpu : usr=95.90%, sys=3.86%, ctx=35, majf=0, minf=130 00:29:52.355 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.355 issued rwts: total=1017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.355 filename0: (groupid=0, jobs=1): err= 0: pid=1666228: Wed May 15 17:14:30 2024 00:29:52.355 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(123MiB/5006msec) 00:29:52.355 slat (nsec): min=5728, max=33682, avg=6673.91, stdev=1578.46 00:29:52.355 clat (usec): min=5589, max=92639, avg=15294.89, stdev=13096.84 00:29:52.355 lat (usec): min=5594, max=92645, avg=15301.57, stdev=13097.05 00:29:52.355 clat percentiles (usec): 
00:29:52.355 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8848], 00:29:52.355 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:29:52.355 | 70.00th=[12911], 80.00th=[14746], 90.00th=[19792], 95.00th=[49546], 00:29:52.355 | 99.00th=[55837], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799], 00:29:52.355 | 99.99th=[92799] 00:29:52.355 bw ( KiB/s): min=15360, max=34816, per=33.11%, avg=25062.40, stdev=5626.11, samples=10 00:29:52.355 iops : min= 120, max= 272, avg=195.80, stdev=43.95, samples=10 00:29:52.355 lat (msec) : 10=28.95%, 20=61.06%, 50=5.61%, 100=4.38% 00:29:52.355 cpu : usr=95.94%, sys=3.82%, ctx=16, majf=0, minf=128 00:29:52.355 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.355 issued rwts: total=981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.355 00:29:52.355 Run status group 0 (all jobs): 00:29:52.355 READ: bw=73.9MiB/s (77.5MB/s), 24.0MiB/s-25.4MiB/s (25.2MB/s-26.6MB/s), io=370MiB (388MB), run=5003-5007msec 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:52.355 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
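[Reader's note] The xtrace that follows recreates subsystems 0-2 on DIF-type-2 null bdevs through rpc_cmd. A minimal standalone sketch of the same sequence is given here for readers reproducing it outside the harness; it assumes rpc_cmd wraps scripts/rpc.py and that an nvmf_tgt is already running with the TCP transport created earlier in the test. Only subsystem 0 is shown; subsystems 1 and 2 repeat the same pattern.

#!/usr/bin/env bash
# Sketch only: mirrors the rpc_cmd calls in the trace below for subsystem 0.
# Assumes an SPDK nvmf_tgt is already running and the TCP transport exists
# (e.g. created earlier with: scripts/rpc.py nvmf_create_transport -t tcp).
RPC=./scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, protection info (DIF) type 2
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Export it over NVMe/TCP on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420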
00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 bdev_null0 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 [2024-05-15 17:14:30.293790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 bdev_null1 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 bdev_null2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.356 { 00:29:52.356 "params": { 00:29:52.356 "name": "Nvme$subsystem", 00:29:52.356 "trtype": "$TEST_TRANSPORT", 00:29:52.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.356 "adrfam": "ipv4", 00:29:52.356 "trsvcid": "$NVMF_PORT", 00:29:52.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.356 "hdgst": ${hdgst:-false}, 00:29:52.356 "ddgst": ${ddgst:-false} 00:29:52.356 }, 00:29:52.356 "method": "bdev_nvme_attach_controller" 00:29:52.356 } 00:29:52.356 EOF 00:29:52.356 )") 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.356 { 00:29:52.356 "params": { 00:29:52.356 "name": "Nvme$subsystem", 00:29:52.356 "trtype": "$TEST_TRANSPORT", 00:29:52.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.356 "adrfam": "ipv4", 00:29:52.356 "trsvcid": "$NVMF_PORT", 00:29:52.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.356 "hdgst": ${hdgst:-false}, 00:29:52.356 "ddgst": ${ddgst:-false} 00:29:52.356 }, 00:29:52.356 "method": "bdev_nvme_attach_controller" 00:29:52.356 } 00:29:52.356 EOF 00:29:52.356 )") 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.356 { 00:29:52.356 "params": { 00:29:52.356 "name": "Nvme$subsystem", 00:29:52.356 "trtype": "$TEST_TRANSPORT", 00:29:52.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.356 "adrfam": "ipv4", 00:29:52.356 "trsvcid": "$NVMF_PORT", 00:29:52.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.356 "hdgst": ${hdgst:-false}, 00:29:52.356 "ddgst": ${ddgst:-false} 00:29:52.356 }, 00:29:52.356 "method": "bdev_nvme_attach_controller" 00:29:52.356 } 00:29:52.356 EOF 00:29:52.356 )") 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:52.356 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.357 "params": { 00:29:52.357 "name": "Nvme0", 00:29:52.357 "trtype": "tcp", 00:29:52.357 "traddr": "10.0.0.2", 00:29:52.357 "adrfam": "ipv4", 00:29:52.357 "trsvcid": "4420", 00:29:52.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.357 "hdgst": false, 00:29:52.357 "ddgst": false 00:29:52.357 }, 00:29:52.357 "method": "bdev_nvme_attach_controller" 00:29:52.357 },{ 00:29:52.357 "params": { 00:29:52.357 "name": "Nvme1", 00:29:52.357 "trtype": "tcp", 00:29:52.357 "traddr": "10.0.0.2", 00:29:52.357 "adrfam": "ipv4", 00:29:52.357 "trsvcid": "4420", 00:29:52.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.357 "hdgst": false, 00:29:52.357 "ddgst": false 00:29:52.357 }, 00:29:52.357 "method": "bdev_nvme_attach_controller" 00:29:52.357 },{ 00:29:52.357 "params": { 00:29:52.357 "name": "Nvme2", 00:29:52.357 "trtype": "tcp", 00:29:52.357 "traddr": "10.0.0.2", 00:29:52.357 "adrfam": "ipv4", 00:29:52.357 "trsvcid": "4420", 00:29:52.357 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:52.357 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:52.357 "hdgst": false, 00:29:52.357 "ddgst": false 00:29:52.357 }, 00:29:52.357 "method": "bdev_nvme_attach_controller" 00:29:52.357 }' 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:52.357 17:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.357 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:52.357 ... 00:29:52.357 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:52.357 ... 00:29:52.357 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:52.357 ... 00:29:52.357 fio-3.35 00:29:52.357 Starting 24 threads 00:29:52.357 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.584 00:30:04.584 filename0: (groupid=0, jobs=1): err= 0: pid=1667676: Wed May 15 17:14:42 2024 00:30:04.584 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10007msec) 00:30:04.584 slat (nsec): min=5861, max=77700, avg=8286.99, stdev=4736.39 00:30:04.584 clat (usec): min=7899, max=44109, avg=31900.61, stdev=1946.06 00:30:04.584 lat (usec): min=7916, max=44131, avg=31908.90, stdev=1944.77 00:30:04.584 clat percentiles (usec): 00:30:04.584 | 1.00th=[21627], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.584 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.584 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.584 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36963], 99.95th=[41681], 00:30:04.584 | 99.99th=[44303] 00:30:04.584 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:30:04.584 iops : min= 480, max= 544, avg=499.20, stdev=19.14, samples=20 00:30:04.584 lat (msec) : 10=0.32%, 20=0.60%, 50=99.08% 00:30:04.584 cpu : usr=99.22%, sys=0.54%, ctx=13, majf=0, minf=9 00:30:04.584 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:04.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.584 filename0: (groupid=0, jobs=1): err= 0: pid=1667677: Wed May 15 17:14:42 2024 00:30:04.584 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10004msec) 00:30:04.584 slat (nsec): min=5876, max=69588, avg=8856.46, stdev=4894.55 00:30:04.584 clat (usec): min=8148, max=38673, avg=31883.68, stdev=1916.74 00:30:04.584 lat (usec): min=8178, max=38681, avg=31892.54, stdev=1915.41 00:30:04.584 clat percentiles (usec): 00:30:04.584 | 1.00th=[23200], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:30:04.584 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.584 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.584 | 99.00th=[33162], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:30:04.584 | 99.99th=[38536] 00:30:04.584 bw ( KiB/s): min= 1920, max= 2052, per=4.16%, avg=1997.00, stdev=64.51, samples=20 00:30:04.584 iops : min= 480, max= 513, avg=499.25, stdev=16.13, samples=20 00:30:04.584 lat (msec) : 10=0.32%, 20=0.32%, 
50=99.36% 00:30:04.584 cpu : usr=98.20%, sys=1.11%, ctx=144, majf=0, minf=9 00:30:04.584 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:04.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.584 filename0: (groupid=0, jobs=1): err= 0: pid=1667678: Wed May 15 17:14:42 2024 00:30:04.584 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10022msec) 00:30:04.584 slat (nsec): min=5979, max=83715, avg=13413.81, stdev=8334.66 00:30:04.584 clat (usec): min=2057, max=39289, avg=31518.85, stdev=3633.99 00:30:04.584 lat (usec): min=2077, max=39300, avg=31532.27, stdev=3632.60 00:30:04.584 clat percentiles (usec): 00:30:04.584 | 1.00th=[ 5735], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.584 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.584 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.584 | 99.00th=[33162], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:30:04.584 | 99.99th=[39060] 00:30:04.584 bw ( KiB/s): min= 1920, max= 2536, per=4.21%, avg=2021.20, stdev=136.66, samples=20 00:30:04.584 iops : min= 480, max= 634, avg=505.30, stdev=34.17, samples=20 00:30:04.584 lat (msec) : 4=0.45%, 10=1.26%, 20=0.22%, 50=98.07% 00:30:04.584 cpu : usr=98.00%, sys=1.15%, ctx=65, majf=0, minf=0 00:30:04.584 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:04.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 issued rwts: total=5069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.584 filename0: (groupid=0, jobs=1): err= 0: pid=1667679: Wed May 15 17:14:42 2024 00:30:04.584 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10006msec) 00:30:04.584 slat (usec): min=5, max=133, avg=15.22, stdev= 9.81 00:30:04.584 clat (usec): min=17096, max=78429, avg=32044.97, stdev=2180.95 00:30:04.584 lat (usec): min=17105, max=78448, avg=32060.20, stdev=2180.87 00:30:04.584 clat percentiles (usec): 00:30:04.584 | 1.00th=[26346], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.584 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.584 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.584 | 99.00th=[33424], 99.50th=[33817], 99.90th=[63701], 99.95th=[63701], 00:30:04.584 | 99.99th=[78119] 00:30:04.584 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1984.15, stdev=77.30, samples=20 00:30:04.584 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:30:04.584 lat (msec) : 20=0.36%, 50=99.32%, 100=0.32% 00:30:04.584 cpu : usr=99.25%, sys=0.50%, ctx=9, majf=0, minf=9 00:30:04.584 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:04.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.584 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.584 filename0: (groupid=0, jobs=1): err= 0: pid=1667681: Wed May 15 17:14:42 2024 00:30:04.584 
read: IOPS=501, BW=2004KiB/s (2052kB/s)(19.6MiB/10018msec) 00:30:04.584 slat (nsec): min=5841, max=62266, avg=9904.27, stdev=6648.76 00:30:04.584 clat (usec): min=17484, max=43766, avg=31847.35, stdev=2206.92 00:30:04.584 lat (usec): min=17492, max=43773, avg=31857.25, stdev=2207.02 00:30:04.584 clat percentiles (usec): 00:30:04.584 | 1.00th=[21365], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.584 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.584 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.584 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:30:04.584 | 99.99th=[43779] 00:30:04.584 bw ( KiB/s): min= 1920, max= 2224, per=4.17%, avg=2001.15, stdev=84.89, samples=20 00:30:04.584 iops : min= 480, max= 556, avg=500.25, stdev=21.21, samples=20 00:30:04.584 lat (msec) : 20=0.34%, 50=99.66% 00:30:04.584 cpu : usr=98.56%, sys=0.81%, ctx=53, majf=0, minf=9 00:30:04.585 IO depths : 1=5.6%, 2=11.6%, 4=24.2%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename0: (groupid=0, jobs=1): err= 0: pid=1667682: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.4MiB/10019msec) 00:30:04.585 slat (nsec): min=5934, max=68659, avg=16585.35, stdev=11615.60 00:30:04.585 clat (usec): min=21122, max=64600, avg=32066.30, stdev=1759.11 00:30:04.585 lat (usec): min=21128, max=64618, avg=32082.88, stdev=1758.57 00:30:04.585 clat percentiles (usec): 00:30:04.585 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.585 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.585 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.585 | 99.00th=[33424], 99.50th=[33424], 99.90th=[58983], 99.95th=[58983], 00:30:04.585 | 99.99th=[64750] 00:30:04.585 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1984.00, stdev=77.69, samples=20 00:30:04.585 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:30:04.585 lat (msec) : 50=99.68%, 100=0.32% 00:30:04.585 cpu : usr=99.13%, sys=0.60%, ctx=10, majf=0, minf=9 00:30:04.585 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename0: (groupid=0, jobs=1): err= 0: pid=1667683: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=496, BW=1988KiB/s (2036kB/s)(19.4MiB/10013msec) 00:30:04.585 slat (nsec): min=5974, max=81712, avg=21052.56, stdev=12570.07 00:30:04.585 clat (usec): min=20860, max=47522, avg=31985.50, stdev=1176.24 00:30:04.585 lat (usec): min=20889, max=47539, avg=32006.56, stdev=1175.51 00:30:04.585 clat percentiles (usec): 00:30:04.585 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.585 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:30:04.585 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.585 | 99.00th=[33424], 
99.50th=[33817], 99.90th=[47449], 99.95th=[47449], 00:30:04.585 | 99.99th=[47449] 00:30:04.585 bw ( KiB/s): min= 1792, max= 2098, per=4.14%, avg=1986.50, stdev=80.61, samples=20 00:30:04.585 iops : min= 448, max= 524, avg=496.60, stdev=20.12, samples=20 00:30:04.585 lat (msec) : 50=100.00% 00:30:04.585 cpu : usr=99.17%, sys=0.51%, ctx=34, majf=0, minf=9 00:30:04.585 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename0: (groupid=0, jobs=1): err= 0: pid=1667684: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10002msec) 00:30:04.585 slat (nsec): min=5866, max=59722, avg=16373.52, stdev=10032.73 00:30:04.585 clat (usec): min=17301, max=56693, avg=31997.04, stdev=1741.28 00:30:04.585 lat (usec): min=17308, max=56710, avg=32013.41, stdev=1741.34 00:30:04.585 clat percentiles (usec): 00:30:04.585 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:30:04.585 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:30:04.585 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.585 | 99.00th=[33817], 99.50th=[34341], 99.90th=[56886], 99.95th=[56886], 00:30:04.585 | 99.99th=[56886] 00:30:04.585 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1980.63, stdev=78.31, samples=19 00:30:04.585 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:04.585 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:30:04.585 cpu : usr=99.20%, sys=0.53%, ctx=13, majf=0, minf=9 00:30:04.585 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename1: (groupid=0, jobs=1): err= 0: pid=1667685: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=507, BW=2030KiB/s (2079kB/s)(19.9MiB/10016msec) 00:30:04.585 slat (nsec): min=5832, max=67396, avg=15345.99, stdev=11318.63 00:30:04.585 clat (usec): min=14743, max=74393, avg=31406.42, stdev=4786.19 00:30:04.585 lat (usec): min=14750, max=74412, avg=31421.76, stdev=4787.67 00:30:04.585 clat percentiles (usec): 00:30:04.585 | 1.00th=[20055], 5.00th=[21890], 10.00th=[25297], 20.00th=[31327], 00:30:04.585 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.585 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[38011], 00:30:04.585 | 99.00th=[49546], 99.50th=[52167], 99.90th=[60031], 99.95th=[73925], 00:30:04.585 | 99.99th=[73925] 00:30:04.585 bw ( KiB/s): min= 1792, max= 2224, per=4.22%, avg=2027.20, stdev=106.14, samples=20 00:30:04.585 iops : min= 448, max= 556, avg=506.80, stdev=26.54, samples=20 00:30:04.585 lat (msec) : 20=0.83%, 50=98.51%, 100=0.67% 00:30:04.585 cpu : usr=98.21%, sys=1.19%, ctx=49, majf=0, minf=9 00:30:04.585 IO depths : 1=3.7%, 2=7.6%, 4=17.3%, 8=61.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=92.2%, 8=2.7%, 
16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=5084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename1: (groupid=0, jobs=1): err= 0: pid=1667686: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10006msec) 00:30:04.585 slat (nsec): min=5890, max=77310, avg=12150.80, stdev=8025.35 00:30:04.585 clat (usec): min=7916, max=36986, avg=31862.11, stdev=1937.61 00:30:04.585 lat (usec): min=7935, max=37016, avg=31874.26, stdev=1936.27 00:30:04.585 clat percentiles (usec): 00:30:04.585 | 1.00th=[21627], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.585 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.585 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.585 | 99.00th=[33424], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:30:04.585 | 99.99th=[36963] 00:30:04.585 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1996.80, stdev=64.34, samples=20 00:30:04.585 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:30:04.585 lat (msec) : 10=0.32%, 20=0.64%, 50=99.04% 00:30:04.585 cpu : usr=98.52%, sys=0.86%, ctx=40, majf=0, minf=9 00:30:04.585 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename1: (groupid=0, jobs=1): err= 0: pid=1667687: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10008msec) 00:30:04.585 slat (usec): min=5, max=178, avg=17.23, stdev=10.81 00:30:04.585 clat (usec): min=7562, max=42026, avg=31723.90, stdev=2554.03 00:30:04.585 lat (usec): min=7574, max=42033, avg=31741.13, stdev=2552.27 00:30:04.585 clat percentiles (usec): 00:30:04.585 | 1.00th=[19268], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:30:04.585 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.585 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.585 | 99.00th=[33424], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:30:04.585 | 99.99th=[42206] 00:30:04.585 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2003.20, stdev=75.15, samples=20 00:30:04.585 iops : min= 480, max= 544, avg=500.80, stdev=18.79, samples=20 00:30:04.585 lat (msec) : 10=0.96%, 20=0.32%, 50=98.73% 00:30:04.585 cpu : usr=98.95%, sys=0.70%, ctx=60, majf=0, minf=9 00:30:04.585 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:04.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.585 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.585 filename1: (groupid=0, jobs=1): err= 0: pid=1667688: Wed May 15 17:14:42 2024 00:30:04.585 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10003msec) 00:30:04.586 slat (nsec): min=5869, max=62892, avg=16452.94, stdev=9168.56 00:30:04.586 clat (usec): min=14327, max=64071, avg=32010.17, stdev=2165.25 00:30:04.586 lat (usec): min=14337, max=64087, avg=32026.62, stdev=2165.09 00:30:04.586 clat 
percentiles (usec): 00:30:04.586 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.586 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.586 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.586 | 99.00th=[33424], 99.50th=[33424], 99.90th=[64226], 99.95th=[64226], 00:30:04.586 | 99.99th=[64226] 00:30:04.586 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1980.63, stdev=78.31, samples=19 00:30:04.586 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:04.586 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:30:04.586 cpu : usr=99.10%, sys=0.63%, ctx=11, majf=0, minf=9 00:30:04.586 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.586 filename1: (groupid=0, jobs=1): err= 0: pid=1667690: Wed May 15 17:14:42 2024 00:30:04.586 read: IOPS=502, BW=2011KiB/s (2060kB/s)(19.7MiB/10007msec) 00:30:04.586 slat (nsec): min=5830, max=72277, avg=13987.60, stdev=10610.41 00:30:04.586 clat (usec): min=7234, max=68605, avg=31750.38, stdev=4241.11 00:30:04.586 lat (usec): min=7241, max=68624, avg=31764.36, stdev=4240.99 00:30:04.586 clat percentiles (usec): 00:30:04.586 | 1.00th=[20317], 5.00th=[25297], 10.00th=[26346], 20.00th=[29230], 00:30:04.586 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.586 | 70.00th=[32375], 80.00th=[32900], 90.00th=[36439], 95.00th=[38536], 00:30:04.586 | 99.00th=[42730], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:30:04.586 | 99.99th=[68682] 00:30:04.586 bw ( KiB/s): min= 1859, max= 2112, per=4.18%, avg=2006.55, stdev=53.86, samples=20 00:30:04.586 iops : min= 464, max= 528, avg=501.60, stdev=13.57, samples=20 00:30:04.586 lat (msec) : 10=0.12%, 20=0.87%, 50=98.41%, 100=0.60% 00:30:04.586 cpu : usr=99.23%, sys=0.50%, ctx=11, majf=0, minf=9 00:30:04.586 IO depths : 1=0.6%, 2=1.4%, 4=4.9%, 8=77.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:30:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 complete : 0=0.0%, 4=89.5%, 8=8.2%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 issued rwts: total=5032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.586 filename1: (groupid=0, jobs=1): err= 0: pid=1667691: Wed May 15 17:14:42 2024 00:30:04.586 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10008msec) 00:30:04.586 slat (nsec): min=5881, max=83343, avg=20627.14, stdev=13081.58 00:30:04.586 clat (usec): min=12040, max=41874, avg=31883.11, stdev=1494.06 00:30:04.586 lat (usec): min=12046, max=41890, avg=31903.74, stdev=1494.22 00:30:04.586 clat percentiles (usec): 00:30:04.586 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.586 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:30:04.586 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.586 | 99.00th=[33424], 99.50th=[33817], 99.90th=[41681], 99.95th=[41681], 00:30:04.586 | 99.99th=[41681] 00:30:04.586 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1990.10, stdev=64.78, samples=20 00:30:04.586 iops : min= 480, max= 512, avg=497.45, stdev=16.21, samples=20 00:30:04.586 lat 
(msec) : 20=0.32%, 50=99.68% 00:30:04.586 cpu : usr=98.55%, sys=0.80%, ctx=109, majf=0, minf=9 00:30:04.586 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.586 filename1: (groupid=0, jobs=1): err= 0: pid=1667692: Wed May 15 17:14:42 2024 00:30:04.586 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10011msec) 00:30:04.586 slat (nsec): min=5969, max=79480, avg=17070.24, stdev=12102.37 00:30:04.586 clat (usec): min=12271, max=44740, avg=31948.70, stdev=1550.10 00:30:04.586 lat (usec): min=12300, max=44757, avg=31965.77, stdev=1550.03 00:30:04.586 clat percentiles (usec): 00:30:04.586 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.586 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.586 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.586 | 99.00th=[33424], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:30:04.586 | 99.99th=[44827] 00:30:04.586 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1989.55, stdev=64.65, samples=20 00:30:04.586 iops : min= 480, max= 512, avg=497.35, stdev=16.14, samples=20 00:30:04.586 lat (msec) : 20=0.32%, 50=99.68% 00:30:04.586 cpu : usr=99.08%, sys=0.66%, ctx=9, majf=0, minf=9 00:30:04.586 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.586 filename1: (groupid=0, jobs=1): err= 0: pid=1667693: Wed May 15 17:14:42 2024 00:30:04.586 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10024msec) 00:30:04.586 slat (nsec): min=5872, max=82910, avg=15573.25, stdev=13264.48 00:30:04.586 clat (usec): min=20843, max=64327, avg=32090.92, stdev=1992.62 00:30:04.586 lat (usec): min=20850, max=64345, avg=32106.49, stdev=1992.12 00:30:04.586 clat percentiles (usec): 00:30:04.586 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.586 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.586 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.586 | 99.00th=[33424], 99.50th=[33817], 99.90th=[64226], 99.95th=[64226], 00:30:04.586 | 99.99th=[64226] 00:30:04.586 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1984.15, stdev=77.30, samples=20 00:30:04.586 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:30:04.586 lat (msec) : 50=99.68%, 100=0.32% 00:30:04.586 cpu : usr=99.10%, sys=0.62%, ctx=20, majf=0, minf=9 00:30:04.586 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.586 filename2: (groupid=0, jobs=1): err= 0: pid=1667694: Wed May 15 17:14:42 2024 00:30:04.586 
read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10005msec) 00:30:04.586 slat (nsec): min=5878, max=64017, avg=17153.44, stdev=10833.19 00:30:04.586 clat (usec): min=9376, max=63969, avg=32020.90, stdev=2705.77 00:30:04.586 lat (usec): min=9383, max=63985, avg=32038.05, stdev=2705.94 00:30:04.586 clat percentiles (usec): 00:30:04.586 | 1.00th=[22676], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.586 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.586 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.586 | 99.00th=[42730], 99.50th=[48497], 99.90th=[63701], 99.95th=[63701], 00:30:04.586 | 99.99th=[64226] 00:30:04.586 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1980.63, stdev=75.72, samples=19 00:30:04.586 iops : min= 448, max= 512, avg=495.16, stdev=18.93, samples=19 00:30:04.586 lat (msec) : 10=0.04%, 20=0.32%, 50=99.20%, 100=0.44% 00:30:04.586 cpu : usr=98.98%, sys=0.75%, ctx=9, majf=0, minf=9 00:30:04.586 IO depths : 1=4.6%, 2=10.8%, 4=24.7%, 8=52.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:30:04.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.586 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.586 filename2: (groupid=0, jobs=1): err= 0: pid=1667695: Wed May 15 17:14:42 2024 00:30:04.586 read: IOPS=511, BW=2046KiB/s (2096kB/s)(20.0MiB/10023msec) 00:30:04.586 slat (nsec): min=5843, max=82784, avg=16366.80, stdev=12676.12 00:30:04.586 clat (usec): min=15458, max=61402, avg=31129.72, stdev=4239.32 00:30:04.586 lat (usec): min=15469, max=61421, avg=31146.09, stdev=4240.59 00:30:04.586 clat percentiles (usec): 00:30:04.586 | 1.00th=[19268], 5.00th=[21627], 10.00th=[25560], 20.00th=[30802], 00:30:04.586 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:30:04.586 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33424], 95.00th=[37487], 00:30:04.586 | 99.00th=[43779], 99.50th=[47973], 99.90th=[50594], 99.95th=[50594], 00:30:04.586 | 99.99th=[61604] 00:30:04.586 bw ( KiB/s): min= 1920, max= 2240, per=4.26%, avg=2044.80, stdev=84.60, samples=20 00:30:04.586 iops : min= 480, max= 560, avg=511.20, stdev=21.15, samples=20 00:30:04.586 lat (msec) : 20=1.81%, 50=97.87%, 100=0.31% 00:30:04.586 cpu : usr=99.16%, sys=0.57%, ctx=21, majf=0, minf=9 00:30:04.587 IO depths : 1=2.9%, 2=6.0%, 4=13.8%, 8=66.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 complete : 0=0.0%, 4=91.3%, 8=4.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=5128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 filename2: (groupid=0, jobs=1): err= 0: pid=1667696: Wed May 15 17:14:42 2024 00:30:04.587 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10011msec) 00:30:04.587 slat (nsec): min=5893, max=73483, avg=19725.70, stdev=11519.25 00:30:04.587 clat (usec): min=12073, max=65736, avg=31906.54, stdev=2440.11 00:30:04.587 lat (usec): min=12086, max=65770, avg=31926.26, stdev=2440.30 00:30:04.587 clat percentiles (usec): 00:30:04.587 | 1.00th=[20841], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.587 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.587 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 
00:30:04.587 | 99.00th=[43254], 99.50th=[44827], 99.90th=[48497], 99.95th=[49021], 00:30:04.587 | 99.99th=[65799] 00:30:04.587 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1989.55, stdev=64.65, samples=20 00:30:04.587 iops : min= 480, max= 512, avg=497.35, stdev=16.14, samples=20 00:30:04.587 lat (msec) : 20=0.62%, 50=99.34%, 100=0.04% 00:30:04.587 cpu : usr=98.91%, sys=0.76%, ctx=69, majf=0, minf=9 00:30:04.587 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 filename2: (groupid=0, jobs=1): err= 0: pid=1667697: Wed May 15 17:14:42 2024 00:30:04.587 read: IOPS=515, BW=2061KiB/s (2111kB/s)(20.1MiB/10005msec) 00:30:04.587 slat (nsec): min=5853, max=76076, avg=14268.42, stdev=9453.10 00:30:04.587 clat (usec): min=12672, max=77173, avg=30942.26, stdev=4733.75 00:30:04.587 lat (usec): min=12693, max=77189, avg=30956.53, stdev=4734.04 00:30:04.587 clat percentiles (usec): 00:30:04.587 | 1.00th=[17171], 5.00th=[22414], 10.00th=[25035], 20.00th=[28443], 00:30:04.587 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:30:04.587 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[38011], 00:30:04.587 | 99.00th=[44827], 99.50th=[47973], 99.90th=[65274], 99.95th=[65274], 00:30:04.587 | 99.99th=[77071] 00:30:04.587 bw ( KiB/s): min= 1920, max= 2288, per=4.28%, avg=2056.15, stdev=97.21, samples=20 00:30:04.587 iops : min= 480, max= 572, avg=514.00, stdev=24.33, samples=20 00:30:04.587 lat (msec) : 20=2.72%, 50=96.97%, 100=0.31% 00:30:04.587 cpu : usr=98.50%, sys=0.83%, ctx=106, majf=0, minf=9 00:30:04.587 IO depths : 1=2.9%, 2=6.1%, 4=14.4%, 8=65.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 complete : 0=0.0%, 4=91.4%, 8=4.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=5156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 filename2: (groupid=0, jobs=1): err= 0: pid=1667698: Wed May 15 17:14:42 2024 00:30:04.587 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10012msec) 00:30:04.587 slat (nsec): min=5854, max=57201, avg=14892.63, stdev=9545.56 00:30:04.587 clat (usec): min=7475, max=42086, avg=31840.45, stdev=2265.78 00:30:04.587 lat (usec): min=7488, max=42093, avg=31855.34, stdev=2265.93 00:30:04.587 clat percentiles (usec): 00:30:04.587 | 1.00th=[21365], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.587 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.587 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.587 | 99.00th=[34341], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:30:04.587 | 99.99th=[42206] 00:30:04.587 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2002.40, stdev=73.46, samples=20 00:30:04.587 iops : min= 480, max= 544, avg=500.60, stdev=18.37, samples=20 00:30:04.587 lat (msec) : 10=0.32%, 20=0.64%, 50=99.04% 00:30:04.587 cpu : usr=99.03%, sys=0.65%, ctx=38, majf=0, minf=9 00:30:04.587 IO depths : 1=5.6%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:04.587 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 filename2: (groupid=0, jobs=1): err= 0: pid=1667700: Wed May 15 17:14:42 2024 00:30:04.587 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10008msec) 00:30:04.587 slat (nsec): min=5438, max=55430, avg=16469.24, stdev=9437.59 00:30:04.587 clat (usec): min=25127, max=49147, avg=32028.13, stdev=1155.88 00:30:04.587 lat (usec): min=25133, max=49162, avg=32044.60, stdev=1155.45 00:30:04.587 clat percentiles (usec): 00:30:04.587 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:30:04.587 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.587 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.587 | 99.00th=[33817], 99.50th=[34341], 99.90th=[49021], 99.95th=[49021], 00:30:04.587 | 99.99th=[49021] 00:30:04.587 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1984.00, stdev=65.66, samples=20 00:30:04.587 iops : min= 480, max= 512, avg=496.00, stdev=16.42, samples=20 00:30:04.587 lat (msec) : 50=100.00% 00:30:04.587 cpu : usr=99.28%, sys=0.46%, ctx=9, majf=0, minf=9 00:30:04.587 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 filename2: (groupid=0, jobs=1): err= 0: pid=1667701: Wed May 15 17:14:42 2024 00:30:04.587 read: IOPS=497, BW=1990KiB/s (2037kB/s)(19.4MiB/10004msec) 00:30:04.587 slat (nsec): min=5943, max=65706, avg=18518.64, stdev=11240.35 00:30:04.587 clat (usec): min=14582, max=64046, avg=31983.16, stdev=2157.21 00:30:04.587 lat (usec): min=14595, max=64063, avg=32001.68, stdev=2157.19 00:30:04.587 clat percentiles (usec): 00:30:04.587 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:30:04.587 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:30:04.587 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:30:04.587 | 99.00th=[33162], 99.50th=[33424], 99.90th=[64226], 99.95th=[64226], 00:30:04.587 | 99.99th=[64226] 00:30:04.587 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1980.63, stdev=78.31, samples=19 00:30:04.587 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:04.587 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:30:04.587 cpu : usr=98.14%, sys=1.05%, ctx=679, majf=0, minf=9 00:30:04.587 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 filename2: (groupid=0, jobs=1): err= 0: pid=1667702: Wed May 15 17:14:42 2024 00:30:04.587 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10007msec) 00:30:04.587 slat (nsec): min=5851, max=79207, avg=18307.40, stdev=13249.88 00:30:04.587 clat (usec): min=7039, max=72106, avg=31925.56, stdev=3435.36 00:30:04.587 lat (usec): min=7049, max=72121, 
avg=31943.86, stdev=3435.41 00:30:04.587 clat percentiles (usec): 00:30:04.587 | 1.00th=[23725], 5.00th=[26608], 10.00th=[31327], 20.00th=[31589], 00:30:04.587 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:04.587 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34866], 00:30:04.587 | 99.00th=[40109], 99.50th=[42206], 99.90th=[71828], 99.95th=[71828], 00:30:04.587 | 99.99th=[71828] 00:30:04.587 bw ( KiB/s): min= 1792, max= 2160, per=4.15%, avg=1992.00, stdev=83.30, samples=20 00:30:04.587 iops : min= 448, max= 540, avg=498.00, stdev=20.83, samples=20 00:30:04.587 lat (msec) : 10=0.08%, 20=0.32%, 50=99.28%, 100=0.32% 00:30:04.587 cpu : usr=99.07%, sys=0.60%, ctx=59, majf=0, minf=9 00:30:04.587 IO depths : 1=4.1%, 2=8.2%, 4=17.1%, 8=60.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:30:04.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 complete : 0=0.0%, 4=92.2%, 8=3.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.587 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:04.587 00:30:04.587 Run status group 0 (all jobs): 00:30:04.587 READ: bw=46.9MiB/s (49.2MB/s), 1986KiB/s-2061KiB/s (2033kB/s-2111kB/s), io=470MiB (493MB), run=10002-10024msec 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:04.587 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 bdev_null0 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 [2024-05-15 17:14:42.273102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 bdev_null1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
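For reference, the create_subsystem calls traced above reduce to four RPCs per subsystem. A minimal standalone sketch, assuming a running nvmf target whose TCP transport has already been created and scripts/rpc.py as the RPC client; the NQN, serial number, and listener address are the values shown in the trace:

#!/usr/bin/env bash
# Sketch of the per-subsystem setup traced above (dif.sh create_subsystem).
# Assumes the SPDK target is running and its TCP transport already exists;
# rpc_cmd in the trace is a wrapper around this RPC client.
RPC=./scripts/rpc.py
sub=0                         # subsystem index, as in "create_subsystem 0"

# 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
$RPC bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1

# Expose it as an NVMe-oF subsystem listening on TCP 10.0.0.2:4420
$RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
     --serial-number "53313233-${sub}" --allow-any-host
$RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
$RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
     -t tcp -a 10.0.0.2 -s 4420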
00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.588 { 00:30:04.588 "params": { 00:30:04.588 "name": "Nvme$subsystem", 00:30:04.588 "trtype": "$TEST_TRANSPORT", 00:30:04.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.588 "adrfam": "ipv4", 00:30:04.588 "trsvcid": "$NVMF_PORT", 00:30:04.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.588 "hdgst": ${hdgst:-false}, 00:30:04.588 "ddgst": ${ddgst:-false} 00:30:04.588 }, 00:30:04.588 "method": "bdev_nvme_attach_controller" 00:30:04.588 } 00:30:04.588 EOF 00:30:04.588 )") 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:04.588 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:04.589 { 00:30:04.589 "params": { 00:30:04.589 "name": "Nvme$subsystem", 00:30:04.589 "trtype": "$TEST_TRANSPORT", 00:30:04.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.589 "adrfam": "ipv4", 00:30:04.589 "trsvcid": "$NVMF_PORT", 00:30:04.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.589 "hdgst": ${hdgst:-false}, 00:30:04.589 "ddgst": ${ddgst:-false} 00:30:04.589 }, 00:30:04.589 "method": "bdev_nvme_attach_controller" 
00:30:04.589 } 00:30:04.589 EOF 00:30:04.589 )") 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:04.589 "params": { 00:30:04.589 "name": "Nvme0", 00:30:04.589 "trtype": "tcp", 00:30:04.589 "traddr": "10.0.0.2", 00:30:04.589 "adrfam": "ipv4", 00:30:04.589 "trsvcid": "4420", 00:30:04.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.589 "hdgst": false, 00:30:04.589 "ddgst": false 00:30:04.589 }, 00:30:04.589 "method": "bdev_nvme_attach_controller" 00:30:04.589 },{ 00:30:04.589 "params": { 00:30:04.589 "name": "Nvme1", 00:30:04.589 "trtype": "tcp", 00:30:04.589 "traddr": "10.0.0.2", 00:30:04.589 "adrfam": "ipv4", 00:30:04.589 "trsvcid": "4420", 00:30:04.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.589 "hdgst": false, 00:30:04.589 "ddgst": false 00:30:04.589 }, 00:30:04.589 "method": "bdev_nvme_attach_controller" 00:30:04.589 }' 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:04.589 17:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:04.589 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:04.589 ... 00:30:04.589 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:04.589 ... 
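The fio invocation itself is visible in the trace: the bdev fio plugin is LD_PRELOADed and the JSON above plus a generated job file are passed in over /dev/fd. A rough standalone equivalent, assuming the JSON printed above is saved to bdev.json and the job sections to randread.fio; the fio and plugin paths are the ones from the trace:

# Sketch: the traced fio_bdev run with on-disk files instead of /dev/fd descriptors.
# bdev.json holds the bdev_nvme_attach_controller config printed above;
# randread.fio holds the filename0/filename1 job sections listed above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path from the trace
LD_PRELOAD="${SPDK_DIR}/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./randread.fio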
00:30:04.589 fio-3.35 00:30:04.589 Starting 4 threads 00:30:04.589 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.872 00:30:09.872 filename0: (groupid=0, jobs=1): err= 0: pid=1669891: Wed May 15 17:14:48 2024 00:30:09.872 read: IOPS=2081, BW=16.3MiB/s (17.1MB/s)(81.3MiB/5001msec) 00:30:09.872 slat (nsec): min=5669, max=52401, avg=9238.56, stdev=3278.00 00:30:09.872 clat (usec): min=1320, max=8424, avg=3817.22, stdev=667.41 00:30:09.872 lat (usec): min=1328, max=8452, avg=3826.46, stdev=667.41 00:30:09.872 clat percentiles (usec): 00:30:09.872 | 1.00th=[ 2540], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3392], 00:30:09.872 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752], 00:30:09.872 | 70.00th=[ 3884], 80.00th=[ 4113], 90.00th=[ 4817], 95.00th=[ 5342], 00:30:09.872 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[ 6718], 00:30:09.872 | 99.99th=[ 8356] 00:30:09.872 bw ( KiB/s): min=16272, max=17424, per=24.75%, avg=16646.40, stdev=339.43, samples=10 00:30:09.872 iops : min= 2034, max= 2178, avg=2080.80, stdev=42.43, samples=10 00:30:09.872 lat (msec) : 2=0.11%, 4=74.16%, 10=25.73% 00:30:09.872 cpu : usr=97.56%, sys=2.10%, ctx=78, majf=0, minf=35 00:30:09.872 IO depths : 1=0.6%, 2=1.6%, 4=71.0%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.872 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.872 issued rwts: total=10410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:09.872 filename0: (groupid=0, jobs=1): err= 0: pid=1669892: Wed May 15 17:14:48 2024 00:30:09.872 read: IOPS=2059, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5002msec) 00:30:09.872 slat (nsec): min=5659, max=52776, avg=8882.90, stdev=3353.38 00:30:09.872 clat (usec): min=1869, max=7587, avg=3858.81, stdev=699.98 00:30:09.872 lat (usec): min=1875, max=7596, avg=3867.70, stdev=699.99 00:30:09.872 clat percentiles (usec): 00:30:09.872 | 1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3392], 00:30:09.872 | 30.00th=[ 3490], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:30:09.872 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 5080], 95.00th=[ 5407], 00:30:09.872 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[ 6915], 00:30:09.872 | 99.99th=[ 7570] 00:30:09.872 bw ( KiB/s): min=16112, max=17056, per=24.50%, avg=16478.40, stdev=285.27, samples=10 00:30:09.872 iops : min= 2014, max= 2132, avg=2059.80, stdev=35.66, samples=10 00:30:09.872 lat (msec) : 2=0.03%, 4=74.27%, 10=25.70% 00:30:09.872 cpu : usr=97.62%, sys=2.10%, ctx=8, majf=0, minf=33 00:30:09.872 IO depths : 1=0.6%, 2=1.4%, 4=71.2%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.872 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.872 issued rwts: total=10302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:09.872 filename1: (groupid=0, jobs=1): err= 0: pid=1669893: Wed May 15 17:14:48 2024 00:30:09.872 read: IOPS=2174, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5002msec) 00:30:09.872 slat (nsec): min=2881, max=49689, avg=6815.36, stdev=2886.47 00:30:09.872 clat (usec): min=931, max=8748, avg=3660.49, stdev=621.32 00:30:09.872 lat (usec): min=937, max=8759, avg=3667.31, stdev=621.34 00:30:09.872 clat percentiles (usec): 00:30:09.872 | 1.00th=[ 2089], 5.00th=[ 2737], 
10.00th=[ 2999], 20.00th=[ 3228], 00:30:09.872 | 30.00th=[ 3425], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3752], 00:30:09.872 | 70.00th=[ 3818], 80.00th=[ 4080], 90.00th=[ 4424], 95.00th=[ 4752], 00:30:09.872 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6325], 99.95th=[ 8717], 00:30:09.872 | 99.99th=[ 8717] 00:30:09.872 bw ( KiB/s): min=15504, max=18288, per=25.87%, avg=17395.20, stdev=798.36, samples=10 00:30:09.872 iops : min= 1938, max= 2286, avg=2174.40, stdev=99.79, samples=10 00:30:09.872 lat (usec) : 1000=0.02% 00:30:09.872 lat (msec) : 2=0.81%, 4=76.11%, 10=23.06% 00:30:09.872 cpu : usr=97.10%, sys=2.66%, ctx=11, majf=0, minf=72 00:30:09.872 IO depths : 1=0.3%, 2=1.3%, 4=69.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.872 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.872 issued rwts: total=10877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:09.872 filename1: (groupid=0, jobs=1): err= 0: pid=1669894: Wed May 15 17:14:48 2024 00:30:09.872 read: IOPS=2090, BW=16.3MiB/s (17.1MB/s)(81.7MiB/5002msec) 00:30:09.872 slat (nsec): min=5702, max=60807, avg=9133.22, stdev=4184.68 00:30:09.872 clat (usec): min=1857, max=7089, avg=3800.34, stdev=664.43 00:30:09.872 lat (usec): min=1863, max=7097, avg=3809.48, stdev=664.64 00:30:09.872 clat percentiles (usec): 00:30:09.872 | 1.00th=[ 2540], 5.00th=[ 2933], 10.00th=[ 3163], 20.00th=[ 3359], 00:30:09.872 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3687], 60.00th=[ 3752], 00:30:09.872 | 70.00th=[ 3884], 80.00th=[ 4113], 90.00th=[ 4752], 95.00th=[ 5276], 00:30:09.872 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6587], 00:30:09.872 | 99.99th=[ 7111] 00:30:09.872 bw ( KiB/s): min=16384, max=17824, per=24.88%, avg=16729.60, stdev=407.52, samples=10 00:30:09.872 iops : min= 2048, max= 2228, avg=2091.20, stdev=50.94, samples=10 00:30:09.872 lat (msec) : 2=0.06%, 4=73.81%, 10=26.13% 00:30:09.872 cpu : usr=97.28%, sys=2.44%, ctx=10, majf=0, minf=50 00:30:09.872 IO depths : 1=0.4%, 2=1.0%, 4=71.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.873 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.873 issued rwts: total=10459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.873 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:09.873 00:30:09.873 Run status group 0 (all jobs): 00:30:09.873 READ: bw=65.7MiB/s (68.9MB/s), 16.1MiB/s-17.0MiB/s (16.9MB/s-17.8MB/s), io=329MiB (344MB), run=5001-5002msec 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 17:14:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 00:30:10.134 real 0m24.706s 00:30:10.134 user 5m18.982s 00:30:10.134 sys 0m3.828s 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 ************************************ 00:30:10.134 END TEST fio_dif_rand_params 00:30:10.134 ************************************ 00:30:10.134 17:14:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:10.134 17:14:48 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:10.134 17:14:48 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 ************************************ 00:30:10.134 START TEST fio_dif_digest 00:30:10.134 ************************************ 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:10.134 17:14:48 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 bdev_null0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:10.134 [2024-05-15 17:14:48.852209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:10.134 { 00:30:10.134 "params": { 00:30:10.134 "name": "Nvme$subsystem", 00:30:10.134 "trtype": "$TEST_TRANSPORT", 00:30:10.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.134 "adrfam": "ipv4", 00:30:10.134 "trsvcid": "$NVMF_PORT", 00:30:10.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.134 "hdgst": ${hdgst:-false}, 00:30:10.134 "ddgst": ${ddgst:-false} 00:30:10.134 }, 00:30:10.134 "method": "bdev_nvme_attach_controller" 00:30:10.134 } 00:30:10.134 EOF 00:30:10.134 )") 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:10.134 "params": { 00:30:10.134 "name": "Nvme0", 00:30:10.134 "trtype": "tcp", 00:30:10.134 "traddr": "10.0.0.2", 00:30:10.134 "adrfam": "ipv4", 00:30:10.134 "trsvcid": "4420", 00:30:10.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:10.134 "hdgst": true, 00:30:10.134 "ddgst": true 00:30:10.134 }, 00:30:10.134 "method": "bdev_nvme_attach_controller" 00:30:10.134 }' 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:10.134 17:14:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.718 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:10.718 ... 
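Only the summary line of the generated job file appears below. A sketch of what it roughly contains, built from the parameters set earlier in the trace (bs=128k, numjobs=3, iodepth=3, runtime=10); filename=Nvme0n1 is an assumed name for the namespace bdev created by the Nvme0 attach above, and the header/data digests themselves come from the hdgst/ddgst flags in the JSON, not from the job file:

# Sketch only: approximate job file behind the "filename0" summary line below.
# thread=1 is required by the spdk_bdev fio plugin; filename=Nvme0n1 is an assumption.
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
bs=128k
iodepth=3
numjobs=3
EOF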
00:30:10.719 fio-3.35 00:30:10.719 Starting 3 threads 00:30:10.719 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.000 00:30:23.000 filename0: (groupid=0, jobs=1): err= 0: pid=1671386: Wed May 15 17:14:59 2024 00:30:23.000 read: IOPS=227, BW=28.5MiB/s (29.8MB/s)(286MiB/10047msec) 00:30:23.000 slat (nsec): min=6040, max=34755, avg=6789.81, stdev=976.75 00:30:23.000 clat (usec): min=8581, max=55615, avg=13145.23, stdev=2156.71 00:30:23.000 lat (usec): min=8588, max=55621, avg=13152.02, stdev=2156.72 00:30:23.000 clat percentiles (usec): 00:30:23.000 | 1.00th=[10028], 5.00th=[11207], 10.00th=[11731], 20.00th=[12256], 00:30:23.000 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:30:23.000 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:30:23.000 | 99.00th=[15533], 99.50th=[15795], 99.90th=[54264], 99.95th=[55313], 00:30:23.000 | 99.99th=[55837] 00:30:23.000 bw ( KiB/s): min=26880, max=30720, per=35.55%, avg=29260.80, stdev=902.62, samples=20 00:30:23.000 iops : min= 210, max= 240, avg=228.60, stdev= 7.05, samples=20 00:30:23.000 lat (msec) : 10=1.01%, 20=98.78%, 50=0.04%, 100=0.17% 00:30:23.000 cpu : usr=95.31%, sys=4.47%, ctx=17, majf=0, minf=108 00:30:23.000 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.000 issued rwts: total=2288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.000 filename0: (groupid=0, jobs=1): err= 0: pid=1671387: Wed May 15 17:14:59 2024 00:30:23.000 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(251MiB/10045msec) 00:30:23.000 slat (nsec): min=6060, max=31442, avg=6808.86, stdev=905.29 00:30:23.000 clat (usec): min=10286, max=58043, avg=15006.96, stdev=3591.23 00:30:23.000 lat (usec): min=10293, max=58050, avg=15013.77, stdev=3591.25 00:30:23.000 clat percentiles (usec): 00:30:23.000 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:30:23.000 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:30:23.000 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16319], 95.00th=[16712], 00:30:23.000 | 99.00th=[17957], 99.50th=[55313], 99.90th=[57410], 99.95th=[57934], 00:30:23.000 | 99.99th=[57934] 00:30:23.000 bw ( KiB/s): min=23552, max=26880, per=31.05%, avg=25554.70, stdev=1084.49, samples=20 00:30:23.000 iops : min= 184, max= 210, avg=199.60, stdev= 8.52, samples=20 00:30:23.000 lat (msec) : 20=99.30%, 50=0.05%, 100=0.65% 00:30:23.000 cpu : usr=96.02%, sys=3.75%, ctx=21, majf=0, minf=139 00:30:23.000 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.000 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.000 filename0: (groupid=0, jobs=1): err= 0: pid=1671388: Wed May 15 17:14:59 2024 00:30:23.000 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(271MiB/10002msec) 00:30:23.000 slat (nsec): min=5928, max=42074, avg=6873.04, stdev=1308.67 00:30:23.000 clat (usec): min=8647, max=17572, avg=13831.39, stdev=1209.64 00:30:23.000 lat (usec): min=8654, max=17578, avg=13838.26, stdev=1209.67 00:30:23.000 clat percentiles (usec): 00:30:23.000 | 
1.00th=[10159], 5.00th=[11863], 10.00th=[12387], 20.00th=[12911], 00:30:23.000 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:30:23.000 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:30:23.000 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:30:23.000 | 99.99th=[17695] 00:30:23.000 bw ( KiB/s): min=26880, max=28928, per=33.69%, avg=27728.84, stdev=677.60, samples=19 00:30:23.000 iops : min= 210, max= 226, avg=216.63, stdev= 5.29, samples=19 00:30:23.000 lat (msec) : 10=0.78%, 20=99.22% 00:30:23.000 cpu : usr=95.80%, sys=3.98%, ctx=19, majf=0, minf=154 00:30:23.000 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.000 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.000 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.000 00:30:23.000 Run status group 0 (all jobs): 00:30:23.001 READ: bw=80.4MiB/s (84.3MB/s), 24.9MiB/s-28.5MiB/s (26.1MB/s-29.8MB/s), io=808MiB (847MB), run=10002-10047msec 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.001 00:30:23.001 real 0m11.136s 00:30:23.001 user 0m44.120s 00:30:23.001 sys 0m1.563s 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:23.001 17:14:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 ************************************ 00:30:23.001 END TEST fio_dif_digest 00:30:23.001 ************************************ 00:30:23.001 17:14:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:23.001 17:14:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:23.001 17:14:59 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:23.001 17:14:59 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:23.001 rmmod nvme_tcp 00:30:23.001 rmmod nvme_fabrics 00:30:23.001 rmmod 
nvme_keyring 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1661056 ']' 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1661056 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1661056 ']' 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1661056 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1661056 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1661056' 00:30:23.001 killing process with pid 1661056 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1661056 00:30:23.001 [2024-05-15 17:15:00.149565] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:23.001 17:15:00 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1661056 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:23.001 17:15:00 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:24.913 Waiting for block devices as requested 00:30:24.913 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:24.913 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:24.913 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:25.173 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:25.173 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:25.173 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:25.433 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:25.433 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:25.433 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:25.693 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:25.693 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:25.952 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:25.952 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:25.953 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:25.953 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:26.212 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:26.212 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:26.472 17:15:05 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:26.472 17:15:05 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:26.472 17:15:05 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.472 17:15:05 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:26.472 17:15:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.472 17:15:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:26.472 17:15:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.384 17:15:07 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.384 00:30:28.384 real 
1m17.407s 00:30:28.384 user 8m6.837s 00:30:28.384 sys 0m19.260s 00:30:28.384 17:15:07 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:28.384 17:15:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:28.384 ************************************ 00:30:28.384 END TEST nvmf_dif 00:30:28.384 ************************************ 00:30:28.644 17:15:07 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:28.644 17:15:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:28.645 17:15:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.645 17:15:07 -- common/autotest_common.sh@10 -- # set +x 00:30:28.645 ************************************ 00:30:28.645 START TEST nvmf_abort_qd_sizes 00:30:28.645 ************************************ 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:28.645 * Looking for test storage... 00:30:28.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
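The NVME_CONNECT and NVME_HOST pieces defined above are only assembled later in the test, but for orientation, a fabrics connect built from those values would look roughly like this. Illustration only, not a verbatim step from this log; the hostnqn/hostid are the gen-hostnqn values from the trace, and the 10.0.0.2:4420 listener and testnqn subsystem NQN are the defaults this suite sets in nvmf/common.sh:

# Illustration only: an "nvme connect" assembled from the variables set above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be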
00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.645 17:15:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:36.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:36.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:36.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:36.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:36.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:30:36.785 00:30:36.785 --- 10.0.0.2 ping statistics --- 00:30:36.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.785 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:36.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:30:36.785 00:30:36.785 --- 10.0.0.1 ping statistics --- 00:30:36.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.785 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:36.785 17:15:14 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:39.327 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:39.327 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1681198 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1681198 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1681198 ']' 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:39.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:39.588 17:15:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:39.588 [2024-05-15 17:15:18.401691] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:30:39.588 [2024-05-15 17:15:18.401751] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.849 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.849 [2024-05-15 17:15:18.473404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:39.849 [2024-05-15 17:15:18.549555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.849 [2024-05-15 17:15:18.549596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.849 [2024-05-15 17:15:18.549604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.849 [2024-05-15 17:15:18.549610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.849 [2024-05-15 17:15:18.549616] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.849 [2024-05-15 17:15:18.549692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.849 [2024-05-15 17:15:18.549814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.849 [2024-05-15 17:15:18.549972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.849 [2024-05-15 17:15:18.549974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:40.418 17:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:40.418 ************************************ 00:30:40.418 START TEST spdk_target_abort 00:30:40.418 ************************************ 00:30:40.418 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:30:40.418 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:40.418 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:30:40.418 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.418 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 spdk_targetn1 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 [2024-05-15 17:15:19.556495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 [2024-05-15 17:15:19.596504] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:40.989 [2024-05-15 17:15:19.596759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:40.989 17:15:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.989 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.989 [2024-05-15 17:15:19.745045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:216 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:30:40.990 [2024-05-15 17:15:19.745068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:30:40.990 [2024-05-15 17:15:19.801629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2272 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:30:40.990 [2024-05-15 17:15:19.801650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:40.990 [2024-05-15 17:15:19.803308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2384 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:30:40.990 [2024-05-15 17:15:19.803323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:40.990 [2024-05-15 17:15:19.803596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2408 len:8 PRP1 0x2000078be000 PRP2 0x0 00:30:40.990 [2024-05-15 17:15:19.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:40.990 [2024-05-15 17:15:19.809007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2504 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:30:40.990 [2024-05-15 17:15:19.809020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:44.282 Initializing NVMe Controllers 00:30:44.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:44.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:44.282 Initialization complete. Launching workers. 
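Before this first run, the spdk_target_abort case assembled its target entirely over JSON-RPC: rpc_cmd (a thin wrapper around scripts/rpc.py talking to the nvmf_tgt started on /var/tmp/spdk.sock inside the namespace) attaches the physical NVMe at 0000:65:00.0 as bdev spdk_targetn1, creates the TCP transport and the test subsystem, adds the bdev as a namespace and opens a listener on 10.0.0.2:4420. Condensed into direct rpc.py calls, assuming $SPDK_DIR points at the checked-out SPDK tree:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }    # rpc_cmd in the trace boils down to this

    rpc bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # exposes bdev spdk_targetn1
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1      # becomes NSID 1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420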
00:30:44.282 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11467, failed: 5 00:30:44.282 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3316, failed to submit 8156 00:30:44.282 success 680, unsuccess 2636, failed 0 00:30:44.282 17:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:44.282 17:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:44.282 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.282 [2024-05-15 17:15:22.959742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c50000 PRP2 0x0 00:30:44.282 [2024-05-15 17:15:22.959793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:30:44.282 [2024-05-15 17:15:22.975710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:656 len:8 PRP1 0x200007c52000 PRP2 0x0 00:30:44.282 [2024-05-15 17:15:22.975735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:30:44.282 [2024-05-15 17:15:23.063684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2744 len:8 PRP1 0x200007c58000 PRP2 0x0 00:30:44.282 [2024-05-15 17:15:23.063711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.662 [2024-05-15 17:15:24.448733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:35448 len:8 PRP1 0x200007c56000 PRP2 0x0 00:30:45.662 [2024-05-15 17:15:24.448766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:47.571 Initializing NVMe Controllers 00:30:47.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:47.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:47.572 Initialization complete. Launching workers. 
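The abort runs themselves come from the rabort helper, which builds the connection string one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and then invokes build/examples/abort once per queue depth in qds=(4 24 64); the trace above shows the qd=4 report and the start of the qd=24 run. A reduced sketch of that loop:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

    for qd in 4 24 64; do
        # 4096-byte mixed read/write I/O (50% reads via -M 50) at the given queue depth;
        # the example also submits abort commands for the outstanding I/O and prints the
        # "abort submitted ... success/unsuccess" totals seen in the log.
        "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done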
00:30:47.572 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8910, failed: 4 00:30:47.572 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7688 00:30:47.572 success 348, unsuccess 878, failed 0 00:30:47.572 17:15:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:47.572 17:15:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:47.572 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.572 [2024-05-15 17:15:26.385030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:161 nsid:1 lba:2656 len:8 PRP1 0x20000790a000 PRP2 0x0 00:30:47.572 [2024-05-15 17:15:26.385074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:161 cdw0:0 sqhd:0021 p:1 m:0 dnr:0 00:30:50.119 [2024-05-15 17:15:28.545290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:142 nsid:1 lba:245064 len:8 PRP1 0x200007908000 PRP2 0x0 00:30:50.119 [2024-05-15 17:15:28.545318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:142 cdw0:0 sqhd:007d p:1 m:0 dnr:0 00:30:50.691 Initializing NVMe Controllers 00:30:50.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:50.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:50.691 Initialization complete. Launching workers. 00:30:50.691 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42035, failed: 2 00:30:50.691 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2846, failed to submit 39191 00:30:50.691 success 596, unsuccess 2250, failed 0 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.691 17:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1681198 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1681198 ']' 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1681198 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:52.606 
17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1681198 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1681198' 00:30:52.606 killing process with pid 1681198 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1681198 00:30:52.606 [2024-05-15 17:15:31.297855] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1681198 00:30:52.606 00:30:52.606 real 0m12.183s 00:30:52.606 user 0m49.526s 00:30:52.606 sys 0m1.762s 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.606 17:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.606 ************************************ 00:30:52.606 END TEST spdk_target_abort 00:30:52.606 ************************************ 00:30:52.866 17:15:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:52.866 17:15:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:52.866 17:15:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.866 17:15:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:52.866 ************************************ 00:30:52.866 START TEST kernel_target_abort 00:30:52.866 ************************************ 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:52.866 17:15:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:52.866 17:15:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:56.170 Waiting for block devices as requested 00:30:56.170 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:56.170 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:56.170 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:56.431 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:56.431 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:56.431 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:56.693 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:56.693 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:56.693 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:56.989 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:56.989 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:56.989 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:57.250 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:57.250 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:57.250 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:57.250 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:57.512 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:57.773 No 
valid GPT data, bailing 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:30:57.773 00:30:57.773 Discovery Log Number of Records 2, Generation counter 2 00:30:57.773 =====Discovery Log Entry 0====== 00:30:57.773 trtype: tcp 00:30:57.773 adrfam: ipv4 00:30:57.773 subtype: current discovery subsystem 00:30:57.773 treq: not specified, sq flow control disable supported 00:30:57.773 portid: 1 00:30:57.773 trsvcid: 4420 00:30:57.773 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:57.773 traddr: 10.0.0.1 00:30:57.773 eflags: none 00:30:57.773 sectype: none 00:30:57.773 =====Discovery Log Entry 1====== 00:30:57.773 trtype: tcp 00:30:57.773 adrfam: ipv4 00:30:57.773 subtype: nvme subsystem 00:30:57.773 treq: not specified, sq flow control disable supported 00:30:57.773 portid: 1 00:30:57.773 trsvcid: 4420 00:30:57.773 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:57.773 traddr: 10.0.0.1 00:30:57.773 eflags: none 00:30:57.773 sectype: none 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # 
local adrfam=IPv4 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:57.773 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:57.774 17:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:57.774 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.163 Initializing NVMe Controllers 00:31:01.163 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:01.163 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:01.163 Initialization complete. Launching workers. 
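For kernel_target_abort the roles flip: setup.sh reset hands the disk back to the kernel nvme driver, and configure_kernel_target (nvmf/common.sh@658-677 above) exports /dev/nvme0n1 through the in-kernel nvmet/nvmet_tcp target using nothing but configfs, after which nvme discover against 10.0.0.1:4420 lists the discovery subsystem plus nqn.2016-06.io.spdk:testnqn. The xtrace hides the redirection targets of the echo commands, so the attribute paths below are the standard nvmet configfs names those values are normally written to (the SPDK-prefixed model string the helper also writes is omitted here); a compressed sketch:

    NQN=nqn.2016-06.io.spdk:testnqn
    DEV=/dev/nvme0n1
    CFG=/sys/kernel/config/nvmet

    modprobe nvmet
    modprobe nvmet_tcp                       # TCP transport; the teardown in the log removes both modules

    mkdir -p "$CFG/subsystems/$NQN/namespaces/1" "$CFG/ports/1"

    echo 1      > "$CFG/subsystems/$NQN/attr_allow_any_host"   # no host entries are linked in the trace
    echo "$DEV" > "$CFG/subsystems/$NQN/namespaces/1/device_path"
    echo 1      > "$CFG/subsystems/$NQN/namespaces/1/enable"

    echo 10.0.0.1 > "$CFG/ports/1/addr_traddr"
    echo tcp      > "$CFG/ports/1/addr_trtype"
    echo 4420     > "$CFG/ports/1/addr_trsvcid"
    echo ipv4     > "$CFG/ports/1/addr_adrfam"

    ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/"     # activate the export

    nvme discover -t tcp -a 10.0.0.1 -s 4420                    # should list $NQN as entry 1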
00:31:01.163 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62903, failed: 0 00:31:01.163 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 62903, failed to submit 0 00:31:01.163 success 0, unsuccess 62903, failed 0 00:31:01.163 17:15:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:01.163 17:15:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:01.163 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.463 Initializing NVMe Controllers 00:31:04.463 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:04.463 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:04.463 Initialization complete. Launching workers. 00:31:04.463 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104982, failed: 0 00:31:04.463 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26462, failed to submit 78520 00:31:04.463 success 0, unsuccess 26462, failed 0 00:31:04.463 17:15:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:04.463 17:15:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:04.463 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.009 Initializing NVMe Controllers 00:31:07.009 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:07.009 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:07.009 Initialization complete. Launching workers. 
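Each run ends with the same three-line report: I/O completed and failed for the namespace, the number of abort commands the controller accepted, and the success/unsuccess/failed split for those aborts (against the kernel target every abort is reported as unsuccess, the bucket the example uses for aborts that completed without actually catching their command, whereas the SPDK target runs above aborted several hundred). When the full console output is saved to a file, the per-run totals can be pulled out with a simple grep; the file name here is just a placeholder:

    # Extract the per-run abort totals from a saved console log (log file name is hypothetical):
    grep -E 'I/O completed:|abort submitted|success [0-9]+, unsuccess' nvmf-tcp-phy-autotest.log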
00:31:07.009 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100598, failed: 0 00:31:07.009 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25150, failed to submit 75448 00:31:07.009 success 0, unsuccess 25150, failed 0 00:31:07.009 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:07.009 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:07.009 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:07.271 17:15:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:10.577 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:10.577 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:10.839 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:10.839 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:10.839 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:10.839 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:12.754 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:12.754 00:31:12.754 real 0m20.053s 00:31:12.754 user 0m9.462s 00:31:12.754 sys 0m6.042s 00:31:12.754 17:15:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:12.754 17:15:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.754 ************************************ 00:31:12.754 END TEST kernel_target_abort 00:31:12.754 ************************************ 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:12.754 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:12.754 rmmod nvme_tcp 00:31:13.015 rmmod nvme_fabrics 00:31:13.015 rmmod nvme_keyring 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1681198 ']' 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1681198 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1681198 ']' 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1681198 00:31:13.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1681198) - No such process 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1681198 is not found' 00:31:13.015 Process with pid 1681198 is not found 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:13.015 17:15:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:16.327 Waiting for block devices as requested 00:31:16.327 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:16.327 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:16.327 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:16.590 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:16.590 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:16.590 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:16.851 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:16.851 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:16.851 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:17.111 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:17.111 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:17.372 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:17.372 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:17.372 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:17.372 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:17.632 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:17.632 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.892 17:15:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.807 17:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:19.807 00:31:19.807 real 0m51.354s 00:31:19.807 user 1m4.251s 00:31:19.807 sys 0m18.261s 00:31:19.807 17:15:58 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:31:19.807 17:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.807 ************************************ 00:31:19.807 END TEST nvmf_abort_qd_sizes 00:31:19.807 ************************************ 00:31:20.069 17:15:58 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:20.069 17:15:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:20.069 17:15:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:20.069 17:15:58 -- common/autotest_common.sh@10 -- # set +x 00:31:20.069 ************************************ 00:31:20.069 START TEST keyring_file 00:31:20.069 ************************************ 00:31:20.069 17:15:58 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:20.069 * Looking for test storage... 00:31:20.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:20.069 17:15:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:20.069 17:15:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.069 17:15:58 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.069 17:15:58 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.069 17:15:58 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.069 17:15:58 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.069 17:15:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.069 17:15:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.069 17:15:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:20.069 17:15:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.069 17:15:58 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:20.070 17:15:58 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Myqbi24e8l 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Myqbi24e8l 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Myqbi24e8l 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Myqbi24e8l 00:31:20.070 17:15:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dRAo0SAuC8 00:31:20.070 17:15:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:20.070 17:15:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:20.331 17:15:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dRAo0SAuC8 00:31:20.331 17:15:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dRAo0SAuC8 00:31:20.331 17:15:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dRAo0SAuC8 00:31:20.331 17:15:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=1691233 00:31:20.331 17:15:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1691233 00:31:20.331 17:15:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:20.331 17:15:58 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1691233 ']' 00:31:20.331 17:15:58 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.331 17:15:58 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:20.331 17:15:58 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
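The keyring_file test revolves around two TLS PSK files: prep_key turns each hex key into the NVMe TLS PSK interchange form through format_interchange_psk (the inline python step above, defined in nvmf/common.sh), stores it in a mktemp file such as /tmp/tmp.Myqbi24e8l and restricts it to mode 0600; the files are registered with the benchmark process further down via keyring_file_add_key. A sketch of those mechanics, with make_psk_file as a hypothetical wrapper name and the encoding itself left to the SPDK helper:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source "$SPDK_DIR/test/nvmf/common.sh"      # provides format_interchange_psk / format_key

    make_psk_file() {    # hypothetical wrapper mirroring prep_key in test/keyring/common.sh
        local key=$1 digest=$2 path
        path=$(mktemp)
        format_interchange_psk "$key" "$digest" > "$path"
        chmod 0600 "$path"                      # same permissions the trace applies
        echo "$path"
    }

    key0path=$(make_psk_file 00112233445566778899aabbccddeeff 0)
    key1path=$(make_psk_file 112233445566778899aabbccddeeff00 0)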
00:31:20.331 17:15:58 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:20.331 17:15:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:20.331 [2024-05-15 17:15:58.981755] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:31:20.331 [2024-05-15 17:15:58.981831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691233 ] 00:31:20.331 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.331 [2024-05-15 17:15:59.045873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.331 [2024-05-15 17:15:59.120716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:21.276 17:15:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 [2024-05-15 17:15:59.750804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.276 null0 00:31:21.276 [2024-05-15 17:15:59.782837] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:21.276 [2024-05-15 17:15:59.782883] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:21.276 [2024-05-15 17:15:59.783192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:21.276 [2024-05-15 17:15:59.790869] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.276 17:15:59 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 [2024-05-15 17:15:59.806916] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:21.276 request: 00:31:21.276 { 00:31:21.276 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.276 "secure_channel": false, 00:31:21.276 "listen_address": { 00:31:21.276 "trtype": "tcp", 00:31:21.276 
"traddr": "127.0.0.1", 00:31:21.276 "trsvcid": "4420" 00:31:21.276 }, 00:31:21.276 "method": "nvmf_subsystem_add_listener", 00:31:21.276 "req_id": 1 00:31:21.276 } 00:31:21.276 Got JSON-RPC error response 00:31:21.276 response: 00:31:21.276 { 00:31:21.276 "code": -32602, 00:31:21.276 "message": "Invalid parameters" 00:31:21.276 } 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:21.276 17:15:59 keyring_file -- keyring/file.sh@46 -- # bperfpid=1691394 00:31:21.276 17:15:59 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1691394 /var/tmp/bperf.sock 00:31:21.276 17:15:59 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1691394 ']' 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:21.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:21.276 17:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:21.276 [2024-05-15 17:15:59.863253] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 
00:31:21.276 [2024-05-15 17:15:59.863300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691394 ] 00:31:21.276 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.276 [2024-05-15 17:15:59.939710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.276 [2024-05-15 17:16:00.004464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.849 17:16:00 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:21.849 17:16:00 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:21.849 17:16:00 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:21.849 17:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:22.111 17:16:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dRAo0SAuC8 00:31:22.111 17:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dRAo0SAuC8 00:31:22.111 17:16:00 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:22.111 17:16:00 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:22.111 17:16:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.111 17:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.111 17:16:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:22.372 17:16:01 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Myqbi24e8l == \/\t\m\p\/\t\m\p\.\M\y\q\b\i\2\4\e\8\l ]] 00:31:22.372 17:16:01 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:22.372 17:16:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:22.372 17:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.372 17:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.372 17:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:22.633 17:16:01 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dRAo0SAuC8 == \/\t\m\p\/\t\m\p\.\d\R\A\o\0\S\A\u\C\8 ]] 00:31:22.633 17:16:01 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:22.633 17:16:01 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:22.633 17:16:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:22.633 17:16:01 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:22.633 17:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.893 17:16:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:22.893 17:16:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.894 17:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.894 [2024-05-15 17:16:01.709895] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:23.155 nvme0n1 00:31:23.155 17:16:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:23.155 17:16:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:23.155 17:16:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:23.155 17:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:23.415 17:16:02 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:23.415 17:16:02 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:23.415 Running I/O for 1 seconds... 
00:31:24.807 00:31:24.807 Latency(us) 00:31:24.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.807 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:24.807 nvme0n1 : 1.16 10382.76 40.56 0.00 0.00 12272.94 6908.59 213210.45 00:31:24.807 =================================================================================================================== 00:31:24.807 Total : 10382.76 40.56 0.00 0.00 12272.94 6908.59 213210.45 00:31:24.807 0 00:31:24.807 17:16:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:24.807 17:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:24.807 17:16:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:24.807 17:16:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:24.807 17:16:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.807 17:16:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.807 17:16:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:24.807 17:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.068 17:16:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:25.068 17:16:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:25.068 17:16:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:25.068 17:16:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:25.068 17:16:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.068 17:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.068 17:16:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:25.068 17:16:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:25.068 17:16:03 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.068 17:16:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:25.068 17:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk key1 00:31:25.328 [2024-05-15 17:16:04.028076] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:25.328 [2024-05-15 17:16:04.028806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd480b0 (107): Transport endpoint is not connected 00:31:25.328 [2024-05-15 17:16:04.029802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd480b0 (9): Bad file descriptor 00:31:25.328 [2024-05-15 17:16:04.030803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.328 [2024-05-15 17:16:04.030812] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:25.328 [2024-05-15 17:16:04.030817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.328 request: 00:31:25.328 { 00:31:25.328 "name": "nvme0", 00:31:25.328 "trtype": "tcp", 00:31:25.328 "traddr": "127.0.0.1", 00:31:25.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:25.328 "adrfam": "ipv4", 00:31:25.328 "trsvcid": "4420", 00:31:25.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:25.328 "psk": "key1", 00:31:25.328 "method": "bdev_nvme_attach_controller", 00:31:25.328 "req_id": 1 00:31:25.328 } 00:31:25.328 Got JSON-RPC error response 00:31:25.328 response: 00:31:25.328 { 00:31:25.328 "code": -32602, 00:31:25.328 "message": "Invalid parameters" 00:31:25.328 } 00:31:25.328 17:16:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:25.328 17:16:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:25.328 17:16:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:25.328 17:16:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:25.328 17:16:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:25.328 17:16:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:25.328 17:16:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:25.328 17:16:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.328 17:16:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:25.328 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.589 17:16:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:25.589 17:16:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:25.589 17:16:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:25.589 17:16:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:25.589 17:16:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.589 17:16:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:25.589 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.589 17:16:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:25.589 17:16:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:25.589 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key 
key0 00:31:25.850 17:16:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:25.850 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:25.850 17:16:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:25.850 17:16:04 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:25.850 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.110 17:16:04 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:26.110 17:16:04 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Myqbi24e8l 00:31:26.110 17:16:04 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.110 17:16:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:26.110 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:26.370 [2024-05-15 17:16:04.967560] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Myqbi24e8l': 0100660 00:31:26.370 [2024-05-15 17:16:04.967575] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:26.370 request: 00:31:26.370 { 00:31:26.370 "name": "key0", 00:31:26.370 "path": "/tmp/tmp.Myqbi24e8l", 00:31:26.370 "method": "keyring_file_add_key", 00:31:26.370 "req_id": 1 00:31:26.370 } 00:31:26.370 Got JSON-RPC error response 00:31:26.370 response: 00:31:26.370 { 00:31:26.370 "code": -1, 00:31:26.370 "message": "Operation not permitted" 00:31:26.370 } 00:31:26.370 17:16:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:26.370 17:16:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:26.370 17:16:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:26.370 17:16:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:26.370 17:16:04 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Myqbi24e8l 00:31:26.370 17:16:04 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:26.370 17:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Myqbi24e8l 00:31:26.370 17:16:05 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Myqbi24e8l 00:31:26.370 17:16:05 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:26.370 17:16:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:26.370 17:16:05 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:31:26.370 17:16:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:26.370 17:16:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:26.370 17:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.631 17:16:05 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:26.631 17:16:05 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.631 17:16:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.631 17:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.631 [2024-05-15 17:16:05.436740] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Myqbi24e8l': No such file or directory 00:31:26.631 [2024-05-15 17:16:05.436753] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:26.631 [2024-05-15 17:16:05.436769] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:26.631 [2024-05-15 17:16:05.436774] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:26.631 [2024-05-15 17:16:05.436779] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:26.631 request: 00:31:26.631 { 00:31:26.631 "name": "nvme0", 00:31:26.631 "trtype": "tcp", 00:31:26.631 "traddr": "127.0.0.1", 00:31:26.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.631 "adrfam": "ipv4", 00:31:26.631 "trsvcid": "4420", 00:31:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.631 "psk": "key0", 00:31:26.631 "method": "bdev_nvme_attach_controller", 00:31:26.631 "req_id": 1 00:31:26.631 } 00:31:26.631 Got JSON-RPC error response 00:31:26.631 response: 00:31:26.631 { 00:31:26.631 "code": -19, 00:31:26.632 "message": "No such device" 00:31:26.632 } 00:31:26.632 17:16:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:26.632 17:16:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:26.632 17:16:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:26.632 17:16:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:26.632 17:16:05 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:26.632 17:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:26.892 17:16:05 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.g9kIO17RNV 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:26.892 17:16:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:26.892 17:16:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:26.892 17:16:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:26.892 17:16:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:26.892 17:16:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:26.892 17:16:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.g9kIO17RNV 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.g9kIO17RNV 00:31:26.892 17:16:05 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.g9kIO17RNV 00:31:26.892 17:16:05 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g9kIO17RNV 00:31:26.892 17:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g9kIO17RNV 00:31:27.154 17:16:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:27.154 17:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:27.415 nvme0n1 00:31:27.415 17:16:06 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:27.415 17:16:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:27.415 17:16:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:27.415 17:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:27.415 17:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:27.415 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:27.415 17:16:06 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:27.415 17:16:06 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:27.415 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:31:27.677 17:16:06 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:27.677 17:16:06 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:27.677 17:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:27.677 17:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:27.677 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:27.677 17:16:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:27.677 17:16:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:27.677 17:16:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:27.677 17:16:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:27.677 17:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:27.937 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:27.937 17:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:27.937 17:16:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:27.937 17:16:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:27.937 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:28.197 17:16:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:28.197 17:16:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:28.197 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.197 17:16:06 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:28.197 17:16:06 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g9kIO17RNV 00:31:28.197 17:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g9kIO17RNV 00:31:28.458 17:16:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dRAo0SAuC8 00:31:28.458 17:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dRAo0SAuC8 00:31:28.458 17:16:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:28.458 17:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:28.719 nvme0n1 00:31:28.719 17:16:07 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:28.719 17:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:28.983 17:16:07 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:28.983 "subsystems": [ 00:31:28.983 { 00:31:28.983 
"subsystem": "keyring", 00:31:28.983 "config": [ 00:31:28.983 { 00:31:28.983 "method": "keyring_file_add_key", 00:31:28.983 "params": { 00:31:28.983 "name": "key0", 00:31:28.983 "path": "/tmp/tmp.g9kIO17RNV" 00:31:28.983 } 00:31:28.983 }, 00:31:28.983 { 00:31:28.983 "method": "keyring_file_add_key", 00:31:28.983 "params": { 00:31:28.983 "name": "key1", 00:31:28.983 "path": "/tmp/tmp.dRAo0SAuC8" 00:31:28.983 } 00:31:28.983 } 00:31:28.983 ] 00:31:28.983 }, 00:31:28.983 { 00:31:28.983 "subsystem": "iobuf", 00:31:28.984 "config": [ 00:31:28.984 { 00:31:28.984 "method": "iobuf_set_options", 00:31:28.984 "params": { 00:31:28.984 "small_pool_count": 8192, 00:31:28.984 "large_pool_count": 1024, 00:31:28.984 "small_bufsize": 8192, 00:31:28.984 "large_bufsize": 135168 00:31:28.984 } 00:31:28.984 } 00:31:28.984 ] 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "subsystem": "sock", 00:31:28.984 "config": [ 00:31:28.984 { 00:31:28.984 "method": "sock_impl_set_options", 00:31:28.984 "params": { 00:31:28.984 "impl_name": "posix", 00:31:28.984 "recv_buf_size": 2097152, 00:31:28.984 "send_buf_size": 2097152, 00:31:28.984 "enable_recv_pipe": true, 00:31:28.984 "enable_quickack": false, 00:31:28.984 "enable_placement_id": 0, 00:31:28.984 "enable_zerocopy_send_server": true, 00:31:28.984 "enable_zerocopy_send_client": false, 00:31:28.984 "zerocopy_threshold": 0, 00:31:28.984 "tls_version": 0, 00:31:28.984 "enable_ktls": false 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "sock_impl_set_options", 00:31:28.984 "params": { 00:31:28.984 "impl_name": "ssl", 00:31:28.984 "recv_buf_size": 4096, 00:31:28.984 "send_buf_size": 4096, 00:31:28.984 "enable_recv_pipe": true, 00:31:28.984 "enable_quickack": false, 00:31:28.984 "enable_placement_id": 0, 00:31:28.984 "enable_zerocopy_send_server": true, 00:31:28.984 "enable_zerocopy_send_client": false, 00:31:28.984 "zerocopy_threshold": 0, 00:31:28.984 "tls_version": 0, 00:31:28.984 "enable_ktls": false 00:31:28.984 } 00:31:28.984 } 00:31:28.984 ] 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "subsystem": "vmd", 00:31:28.984 "config": [] 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "subsystem": "accel", 00:31:28.984 "config": [ 00:31:28.984 { 00:31:28.984 "method": "accel_set_options", 00:31:28.984 "params": { 00:31:28.984 "small_cache_size": 128, 00:31:28.984 "large_cache_size": 16, 00:31:28.984 "task_count": 2048, 00:31:28.984 "sequence_count": 2048, 00:31:28.984 "buf_count": 2048 00:31:28.984 } 00:31:28.984 } 00:31:28.984 ] 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "subsystem": "bdev", 00:31:28.984 "config": [ 00:31:28.984 { 00:31:28.984 "method": "bdev_set_options", 00:31:28.984 "params": { 00:31:28.984 "bdev_io_pool_size": 65535, 00:31:28.984 "bdev_io_cache_size": 256, 00:31:28.984 "bdev_auto_examine": true, 00:31:28.984 "iobuf_small_cache_size": 128, 00:31:28.984 "iobuf_large_cache_size": 16 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "bdev_raid_set_options", 00:31:28.984 "params": { 00:31:28.984 "process_window_size_kb": 1024 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "bdev_iscsi_set_options", 00:31:28.984 "params": { 00:31:28.984 "timeout_sec": 30 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "bdev_nvme_set_options", 00:31:28.984 "params": { 00:31:28.984 "action_on_timeout": "none", 00:31:28.984 "timeout_us": 0, 00:31:28.984 "timeout_admin_us": 0, 00:31:28.984 "keep_alive_timeout_ms": 10000, 00:31:28.984 "arbitration_burst": 0, 00:31:28.984 "low_priority_weight": 0, 
00:31:28.984 "medium_priority_weight": 0, 00:31:28.984 "high_priority_weight": 0, 00:31:28.984 "nvme_adminq_poll_period_us": 10000, 00:31:28.984 "nvme_ioq_poll_period_us": 0, 00:31:28.984 "io_queue_requests": 512, 00:31:28.984 "delay_cmd_submit": true, 00:31:28.984 "transport_retry_count": 4, 00:31:28.984 "bdev_retry_count": 3, 00:31:28.984 "transport_ack_timeout": 0, 00:31:28.984 "ctrlr_loss_timeout_sec": 0, 00:31:28.984 "reconnect_delay_sec": 0, 00:31:28.984 "fast_io_fail_timeout_sec": 0, 00:31:28.984 "disable_auto_failback": false, 00:31:28.984 "generate_uuids": false, 00:31:28.984 "transport_tos": 0, 00:31:28.984 "nvme_error_stat": false, 00:31:28.984 "rdma_srq_size": 0, 00:31:28.984 "io_path_stat": false, 00:31:28.984 "allow_accel_sequence": false, 00:31:28.984 "rdma_max_cq_size": 0, 00:31:28.984 "rdma_cm_event_timeout_ms": 0, 00:31:28.984 "dhchap_digests": [ 00:31:28.984 "sha256", 00:31:28.984 "sha384", 00:31:28.984 "sha512" 00:31:28.984 ], 00:31:28.984 "dhchap_dhgroups": [ 00:31:28.984 "null", 00:31:28.984 "ffdhe2048", 00:31:28.984 "ffdhe3072", 00:31:28.984 "ffdhe4096", 00:31:28.984 "ffdhe6144", 00:31:28.984 "ffdhe8192" 00:31:28.984 ] 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "bdev_nvme_attach_controller", 00:31:28.984 "params": { 00:31:28.984 "name": "nvme0", 00:31:28.984 "trtype": "TCP", 00:31:28.984 "adrfam": "IPv4", 00:31:28.984 "traddr": "127.0.0.1", 00:31:28.984 "trsvcid": "4420", 00:31:28.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:28.984 "prchk_reftag": false, 00:31:28.984 "prchk_guard": false, 00:31:28.984 "ctrlr_loss_timeout_sec": 0, 00:31:28.984 "reconnect_delay_sec": 0, 00:31:28.984 "fast_io_fail_timeout_sec": 0, 00:31:28.984 "psk": "key0", 00:31:28.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:28.984 "hdgst": false, 00:31:28.984 "ddgst": false 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "bdev_nvme_set_hotplug", 00:31:28.984 "params": { 00:31:28.984 "period_us": 100000, 00:31:28.984 "enable": false 00:31:28.984 } 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "method": "bdev_wait_for_examine" 00:31:28.984 } 00:31:28.984 ] 00:31:28.984 }, 00:31:28.984 { 00:31:28.984 "subsystem": "nbd", 00:31:28.984 "config": [] 00:31:28.984 } 00:31:28.984 ] 00:31:28.984 }' 00:31:28.984 17:16:07 keyring_file -- keyring/file.sh@114 -- # killprocess 1691394 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1691394 ']' 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1691394 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1691394 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1691394' 00:31:28.984 killing process with pid 1691394 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@965 -- # kill 1691394 00:31:28.984 Received shutdown signal, test time was about 1.000000 seconds 00:31:28.984 00:31:28.984 Latency(us) 00:31:28.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.984 
=================================================================================================================== 00:31:28.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:28.984 17:16:07 keyring_file -- common/autotest_common.sh@970 -- # wait 1691394 00:31:29.246 17:16:07 keyring_file -- keyring/file.sh@117 -- # bperfpid=1693028 00:31:29.246 17:16:07 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1693028 /var/tmp/bperf.sock 00:31:29.246 17:16:07 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1693028 ']' 00:31:29.246 17:16:07 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:29.246 17:16:07 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:29.246 17:16:07 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:29.246 17:16:07 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:29.246 17:16:07 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:29.246 17:16:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:29.246 17:16:07 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:29.246 "subsystems": [ 00:31:29.246 { 00:31:29.246 "subsystem": "keyring", 00:31:29.246 "config": [ 00:31:29.246 { 00:31:29.246 "method": "keyring_file_add_key", 00:31:29.246 "params": { 00:31:29.246 "name": "key0", 00:31:29.246 "path": "/tmp/tmp.g9kIO17RNV" 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "keyring_file_add_key", 00:31:29.246 "params": { 00:31:29.246 "name": "key1", 00:31:29.246 "path": "/tmp/tmp.dRAo0SAuC8" 00:31:29.246 } 00:31:29.246 } 00:31:29.246 ] 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "subsystem": "iobuf", 00:31:29.246 "config": [ 00:31:29.246 { 00:31:29.246 "method": "iobuf_set_options", 00:31:29.246 "params": { 00:31:29.246 "small_pool_count": 8192, 00:31:29.246 "large_pool_count": 1024, 00:31:29.246 "small_bufsize": 8192, 00:31:29.246 "large_bufsize": 135168 00:31:29.246 } 00:31:29.246 } 00:31:29.246 ] 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "subsystem": "sock", 00:31:29.246 "config": [ 00:31:29.246 { 00:31:29.246 "method": "sock_impl_set_options", 00:31:29.246 "params": { 00:31:29.246 "impl_name": "posix", 00:31:29.246 "recv_buf_size": 2097152, 00:31:29.246 "send_buf_size": 2097152, 00:31:29.246 "enable_recv_pipe": true, 00:31:29.246 "enable_quickack": false, 00:31:29.246 "enable_placement_id": 0, 00:31:29.246 "enable_zerocopy_send_server": true, 00:31:29.246 "enable_zerocopy_send_client": false, 00:31:29.246 "zerocopy_threshold": 0, 00:31:29.246 "tls_version": 0, 00:31:29.246 "enable_ktls": false 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "sock_impl_set_options", 00:31:29.246 "params": { 00:31:29.246 "impl_name": "ssl", 00:31:29.246 "recv_buf_size": 4096, 00:31:29.246 "send_buf_size": 4096, 00:31:29.246 "enable_recv_pipe": true, 00:31:29.246 "enable_quickack": false, 00:31:29.246 "enable_placement_id": 0, 00:31:29.246 "enable_zerocopy_send_server": true, 00:31:29.246 "enable_zerocopy_send_client": false, 00:31:29.246 "zerocopy_threshold": 0, 00:31:29.246 "tls_version": 0, 00:31:29.246 "enable_ktls": false 00:31:29.246 } 00:31:29.246 } 00:31:29.246 ] 00:31:29.246 }, 
00:31:29.246 { 00:31:29.246 "subsystem": "vmd", 00:31:29.246 "config": [] 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "subsystem": "accel", 00:31:29.246 "config": [ 00:31:29.246 { 00:31:29.246 "method": "accel_set_options", 00:31:29.246 "params": { 00:31:29.246 "small_cache_size": 128, 00:31:29.246 "large_cache_size": 16, 00:31:29.246 "task_count": 2048, 00:31:29.246 "sequence_count": 2048, 00:31:29.246 "buf_count": 2048 00:31:29.246 } 00:31:29.246 } 00:31:29.246 ] 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "subsystem": "bdev", 00:31:29.246 "config": [ 00:31:29.246 { 00:31:29.246 "method": "bdev_set_options", 00:31:29.246 "params": { 00:31:29.246 "bdev_io_pool_size": 65535, 00:31:29.246 "bdev_io_cache_size": 256, 00:31:29.246 "bdev_auto_examine": true, 00:31:29.246 "iobuf_small_cache_size": 128, 00:31:29.246 "iobuf_large_cache_size": 16 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "bdev_raid_set_options", 00:31:29.246 "params": { 00:31:29.246 "process_window_size_kb": 1024 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "bdev_iscsi_set_options", 00:31:29.246 "params": { 00:31:29.246 "timeout_sec": 30 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "bdev_nvme_set_options", 00:31:29.246 "params": { 00:31:29.246 "action_on_timeout": "none", 00:31:29.246 "timeout_us": 0, 00:31:29.246 "timeout_admin_us": 0, 00:31:29.246 "keep_alive_timeout_ms": 10000, 00:31:29.246 "arbitration_burst": 0, 00:31:29.246 "low_priority_weight": 0, 00:31:29.246 "medium_priority_weight": 0, 00:31:29.246 "high_priority_weight": 0, 00:31:29.246 "nvme_adminq_poll_period_us": 10000, 00:31:29.246 "nvme_ioq_poll_period_us": 0, 00:31:29.246 "io_queue_requests": 512, 00:31:29.246 "delay_cmd_submit": true, 00:31:29.246 "transport_retry_count": 4, 00:31:29.246 "bdev_retry_count": 3, 00:31:29.246 "transport_ack_timeout": 0, 00:31:29.246 "ctrlr_loss_timeout_sec": 0, 00:31:29.246 "reconnect_delay_sec": 0, 00:31:29.246 "fast_io_fail_timeout_sec": 0, 00:31:29.246 "disable_auto_failback": false, 00:31:29.246 "generate_uuids": false, 00:31:29.246 "transport_tos": 0, 00:31:29.246 "nvme_error_stat": false, 00:31:29.246 "rdma_srq_size": 0, 00:31:29.246 "io_path_stat": false, 00:31:29.246 "allow_accel_sequence": false, 00:31:29.246 "rdma_max_cq_size": 0, 00:31:29.246 "rdma_cm_event_timeout_ms": 0, 00:31:29.246 "dhchap_digests": [ 00:31:29.246 "sha256", 00:31:29.246 "sha384", 00:31:29.246 "sha512" 00:31:29.246 ], 00:31:29.246 "dhchap_dhgroups": [ 00:31:29.246 "null", 00:31:29.246 "ffdhe2048", 00:31:29.246 "ffdhe3072", 00:31:29.246 "ffdhe4096", 00:31:29.246 "ffdhe6144", 00:31:29.246 "ffdhe8192" 00:31:29.246 ] 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "bdev_nvme_attach_controller", 00:31:29.246 "params": { 00:31:29.246 "name": "nvme0", 00:31:29.246 "trtype": "TCP", 00:31:29.246 "adrfam": "IPv4", 00:31:29.246 "traddr": "127.0.0.1", 00:31:29.246 "trsvcid": "4420", 00:31:29.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:29.246 "prchk_reftag": false, 00:31:29.246 "prchk_guard": false, 00:31:29.246 "ctrlr_loss_timeout_sec": 0, 00:31:29.246 "reconnect_delay_sec": 0, 00:31:29.246 "fast_io_fail_timeout_sec": 0, 00:31:29.246 "psk": "key0", 00:31:29.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:29.246 "hdgst": false, 00:31:29.246 "ddgst": false 00:31:29.246 } 00:31:29.246 }, 00:31:29.246 { 00:31:29.246 "method": "bdev_nvme_set_hotplug", 00:31:29.246 "params": { 00:31:29.246 "period_us": 100000, 00:31:29.246 "enable": false 00:31:29.246 } 00:31:29.246 
}, 00:31:29.247 { 00:31:29.247 "method": "bdev_wait_for_examine" 00:31:29.247 } 00:31:29.247 ] 00:31:29.247 }, 00:31:29.247 { 00:31:29.247 "subsystem": "nbd", 00:31:29.247 "config": [] 00:31:29.247 } 00:31:29.247 ] 00:31:29.247 }' 00:31:29.247 [2024-05-15 17:16:07.942492] Starting SPDK v24.05-pre git sha1 c7a82f3a8 / DPDK 23.11.0 initialization... 00:31:29.247 [2024-05-15 17:16:07.942552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693028 ] 00:31:29.247 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.247 [2024-05-15 17:16:08.014525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.247 [2024-05-15 17:16:08.067607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.507 [2024-05-15 17:16:08.201333] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:30.080 17:16:08 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:30.080 17:16:08 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:30.080 17:16:08 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:30.080 17:16:08 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:30.080 17:16:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:30.080 17:16:08 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:30.080 17:16:08 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:30.080 17:16:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:30.080 17:16:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:30.080 17:16:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:30.080 17:16:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:30.080 17:16:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:30.379 17:16:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:30.379 17:16:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:30.379 17:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:30.379 17:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:30.379 17:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:30.379 17:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:30.379 17:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:30.665 17:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.g9kIO17RNV 
/tmp/tmp.dRAo0SAuC8 00:31:30.665 17:16:09 keyring_file -- keyring/file.sh@20 -- # killprocess 1693028 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1693028 ']' 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1693028 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1693028 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1693028' 00:31:30.665 killing process with pid 1693028 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@965 -- # kill 1693028 00:31:30.665 Received shutdown signal, test time was about 1.000000 seconds 00:31:30.665 00:31:30.665 Latency(us) 00:31:30.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.665 =================================================================================================================== 00:31:30.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:30.665 17:16:09 keyring_file -- common/autotest_common.sh@970 -- # wait 1693028 00:31:30.925 17:16:09 keyring_file -- keyring/file.sh@21 -- # killprocess 1691233 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1691233 ']' 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1691233 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1691233 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1691233' 00:31:30.925 killing process with pid 1691233 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@965 -- # kill 1691233 00:31:30.925 [2024-05-15 17:16:09.583583] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:30.925 [2024-05-15 17:16:09.583618] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:30.925 17:16:09 keyring_file -- common/autotest_common.sh@970 -- # wait 1691233 00:31:31.186 00:31:31.186 real 0m11.136s 00:31:31.186 user 0m26.361s 00:31:31.186 sys 0m2.566s 00:31:31.186 17:16:09 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:31.186 17:16:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:31.186 ************************************ 00:31:31.186 END TEST keyring_file 00:31:31.186 ************************************ 00:31:31.186 17:16:09 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:31:31.186 17:16:09 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 
']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:31.186 17:16:09 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:31:31.186 17:16:09 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:31.186 17:16:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:31.186 17:16:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:31.186 17:16:09 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:31:31.186 17:16:09 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:31:31.186 17:16:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:31.186 17:16:09 -- common/autotest_common.sh@10 -- # set +x 00:31:31.186 17:16:09 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:31:31.186 17:16:09 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:31:31.186 17:16:09 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:31:31.186 17:16:09 -- common/autotest_common.sh@10 -- # set +x 00:31:39.324 INFO: APP EXITING 00:31:39.324 INFO: killing all VMs 00:31:39.325 INFO: killing vhost app 00:31:39.325 INFO: EXIT DONE 00:31:41.240 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:31:41.240 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:31:41.240 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:31:41.240 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:65:00.0 (144d a80a): Already using the nvme driver 00:31:41.501 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:31:41.501 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:31:41.762 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:31:41.762 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:31:45.065 Cleaning 00:31:45.065 Removing: /var/run/dpdk/spdk0/config 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:45.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:45.066 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:45.066 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:45.066 
Removing: /var/run/dpdk/spdk1/config
00:31:45.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:31:45.327 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:31:45.327 Removing: /var/run/dpdk/spdk1/hugepage_info
00:31:45.327 Removing: /var/run/dpdk/spdk1/mp_socket
00:31:45.327 Removing: /var/run/dpdk/spdk2/config
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:31:45.327 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:31:45.327 Removing: /var/run/dpdk/spdk2/hugepage_info
00:31:45.327 Removing: /var/run/dpdk/spdk3/config
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:31:45.327 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:31:45.327 Removing: /var/run/dpdk/spdk3/hugepage_info
00:31:45.327 Removing: /var/run/dpdk/spdk4/config
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:31:45.327 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:31:45.327 Removing: /var/run/dpdk/spdk4/hugepage_info
00:31:45.327 Removing: /dev/shm/bdev_svc_trace.1
00:31:45.327 Removing: /dev/shm/nvmf_trace.0
00:31:45.327 Removing: /dev/shm/spdk_tgt_trace.pid1243072
00:31:45.327 Removing: /var/run/dpdk/spdk0
00:31:45.327 Removing: /var/run/dpdk/spdk1
00:31:45.327 Removing: /var/run/dpdk/spdk2
00:31:45.327 Removing: /var/run/dpdk/spdk3
00:31:45.327 Removing: /var/run/dpdk/spdk4
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1241455
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1243072
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1243604
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1244797
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1244976
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1246256
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1246369
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1246767
00:31:45.327 Removing: /var/run/dpdk/spdk_pid1247644
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1248395
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1248753
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1249021
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1249316
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1249648
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1250001
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1250350
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1250620
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1251801
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1255048
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1255421
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1255791
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1256046
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1256495
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1256534
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1257152
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1257214
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1257580
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1257765
00:31:45.588 Removing: /var/run/dpdk/spdk_pid1257954
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1258195
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1258719
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1258951
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1259203
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1259525
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1259634
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1259932
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1260173
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1260372
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1260671
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1261026
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1261374
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1261667
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1261855
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1262116
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1262465
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1262821
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1263145
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1263320
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1263563
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1263910
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1264260
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1264613
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1264823
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1265036
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1265361
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1265713
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1266001
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1266651
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1271176
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1324392
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1329993
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1341738
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1348046
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1352825
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1353494
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1367040
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1367050
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1368054
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1369085
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1370146
00:31:45.589 Removing: /var/run/dpdk/spdk_pid1370758
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1370905
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1371120
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1371343
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1371345
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1372336
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1373326
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1374317
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1375091
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1375098
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1375428
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1377294
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1378660
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1388571
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1388917
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1393887
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1400569
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1403507
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1415335
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1425913
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1428473
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1429650
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1449611
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1454204
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1484344
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1489417
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1491390
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1493575
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1493714
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1494050
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1494303
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1494769
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1497049
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1498052
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1498515
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1501185
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1501880
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1502582
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1507589
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1519403
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1524735
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1531863
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1533344
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1535167
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1540218
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1544962
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1553881
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1553883
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1558864
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1559033
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1559233
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1559874
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1559883
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1564978
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1565686
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1570813
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1574129
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1581014
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1587195
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1596963
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1605534
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1605536
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1627462
00:31:45.851 Removing: /var/run/dpdk/spdk_pid1628247
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1628920
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1630049
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1631094
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1631775
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1632455
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1633153
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1638142
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1638457
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1645582
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1645785
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1648300
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1655391
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1655523
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1661328
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1663596
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1665780
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1667247
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1669675
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1670943
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1681347
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1681997
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1682652
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1685562
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1686022
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1686567
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1691233
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1691394
00:31:46.113 Removing: /var/run/dpdk/spdk_pid1693028
00:31:46.113 Clean
00:31:46.113 17:16:24 -- common/autotest_common.sh@1447 -- # return 0
00:31:46.113 17:16:24 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:31:46.113 17:16:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:46.113 17:16:24 -- common/autotest_common.sh@10 -- # set +x
00:31:46.113 17:16:24 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:31:46.113 17:16:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:46.113 17:16:24 -- common/autotest_common.sh@10 -- # set +x
00:31:46.113 17:16:24 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:46.113 17:16:24 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:31:46.113 17:16:24 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:31:46.113 17:16:24 -- spdk/autotest.sh@387 -- # hash lcov
00:31:46.113 17:16:24 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:46.113 17:16:24 -- spdk/autotest.sh@389 -- # hostname
00:31:46.113 17:16:24 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:31:46.380 geninfo: WARNING: invalid characters removed from testname!
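The coverage steps above and below follow a standard lcov flow: the capture pass writes the per-test counters into cov_test.info, the next commands merge that with the pre-test baseline, and the -r passes strip third-party and example code out of the combined report. A minimal standalone sketch of that capture/merge/filter sequence (an lcov 1.x command line is assumed; the build directory and .info paths here are placeholders, not the job's real ones):

    # Sketch only: same lcov flow as in this log, with placeholder paths.
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    BUILD_DIR=./spdk            # tree compiled with coverage enabled (placeholder)
    OUT=./coverage              # directory holding the .info files (placeholder)
    lcov $LCOV_OPTS -c -d "$BUILD_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"              # capture post-test counters
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info" # merge baseline + test
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"             # drop bundled DPDK
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"             # drop system headers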
00:32:12.954 17:16:48 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:12.954 17:16:51 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:14.336 17:16:53 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:16.246 17:16:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:17.636 17:16:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:19.018 17:16:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:20.933 17:16:59 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:20.933 17:16:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:20.933 17:16:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:20.933 17:16:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:20.933 17:16:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:20.933 17:16:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:20.933 17:16:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:20.933 17:16:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:20.933 17:16:59 -- paths/export.sh@5 -- $ export PATH
00:32:20.933 17:16:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:20.933 17:16:59 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:32:20.933 17:16:59 -- common/autobuild_common.sh@437 -- $ date +%s
00:32:20.933 17:16:59 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715786219.XXXXXX
00:32:20.933 17:16:59 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715786219.jAUN0R
00:32:20.933 17:16:59 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:32:20.933 17:16:59 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:32:20.933 17:16:59 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:32:20.933 17:16:59 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:32:20.933 17:16:59 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:32:20.933 17:16:59 -- common/autobuild_common.sh@453 -- $ get_config_params
00:32:20.933 17:16:59 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:32:20.933 17:16:59 -- common/autotest_common.sh@10 -- $ set +x
00:32:20.933 17:16:59 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:32:20.933 17:16:59 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:32:20.933 17:16:59 -- pm/common@17 -- $ local monitor
00:32:20.933 17:16:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:20.933 17:16:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:20.933 17:16:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:20.933 17:16:59 -- pm/common@21 -- $ date +%s
00:32:20.933 17:16:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:20.933 17:16:59 -- pm/common@21 -- $ date +%s
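start_monitor_resources, traced above and continued below, backgrounds the power/CPU collectors and records each one's PID under the power output directory; the stop_monitor_resources call near the end of the log reads those pidfiles back and sends TERM. A generic sketch of that pidfile start/stop pattern (the function and file names here are illustrative, not the pm helpers' real internals):

    # Illustrative pidfile pattern; not the actual scripts/perf/pm implementation.
    POWER_DIR=./output/power
    mkdir -p "$POWER_DIR"
    start_monitor() {                                   # run a collector in the background, remember its PID
        local name=$1; shift
        "$@" >"$POWER_DIR/$name.log" 2>&1 &
        echo $! >"$POWER_DIR/$name.pid"
    }
    stop_monitors() {                                   # signal every collector that left a pidfile behind
        local pidfile
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
        done
    }
    trap stop_monitors EXIT
    start_monitor collect-vmstat vmstat 1               # stand-in for the real collect-vmstat script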
00:32:20.933 17:16:59 -- pm/common@25 -- $ sleep 1
00:32:20.933 17:16:59 -- pm/common@21 -- $ date +%s
00:32:20.933 17:16:59 -- pm/common@21 -- $ date +%s
00:32:20.933 17:16:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786219
00:32:20.933 17:16:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786219
00:32:20.933 17:16:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786219
00:32:20.933 17:16:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786219
00:32:20.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786219_collect-vmstat.pm.log
00:32:20.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786219_collect-cpu-load.pm.log
00:32:20.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786219_collect-cpu-temp.pm.log
00:32:20.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786219_collect-bmc-pm.bmc.pm.log
00:32:21.878 17:17:00 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:32:21.878 17:17:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:32:21.878 17:17:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:21.878 17:17:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:21.878 17:17:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:21.878 17:17:00 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:21.878 17:17:00 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:21.878 17:17:00 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:21.878 17:17:00 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:21.878 17:17:00 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:21.878 17:17:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:21.878 17:17:00 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:21.878 17:17:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:21.878 17:17:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:21.878 17:17:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:32:21.878 17:17:00 -- pm/common@44 -- $ pid=1701602
00:32:21.878 17:17:00 -- pm/common@50 -- $ kill -TERM 1701602
00:32:21.878 17:17:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:21.878 17:17:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:32:21.878 17:17:00 -- pm/common@44 -- $ pid=1701603
00:32:21.878 17:17:00 -- pm/common@50 -- $ kill -TERM 1701603
00:32:21.878 17:17:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:21.878 17:17:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:32:21.878 17:17:00 -- pm/common@44 -- $ pid=1701606
00:32:21.878 17:17:00 -- pm/common@50 -- $ kill -TERM 1701606
00:32:21.878 17:17:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:21.878 17:17:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:32:21.878 17:17:00 -- pm/common@44 -- $ pid=1701626
00:32:21.878 17:17:00 -- pm/common@50 -- $ sudo -E kill -TERM 1701626
00:32:21.878 + [[ -n 1123309 ]]
00:32:21.878 + sudo kill 1123309
00:32:21.948 [Pipeline] }
00:32:21.965 [Pipeline] // stage
00:32:21.970 [Pipeline] }
00:32:21.988 [Pipeline] // timeout
00:32:21.993 [Pipeline] }
00:32:22.010 [Pipeline] // catchError
00:32:22.016 [Pipeline] }
00:32:22.032 [Pipeline] // wrap
00:32:22.038 [Pipeline] }
00:32:22.051 [Pipeline] // catchError
00:32:22.059 [Pipeline] stage
00:32:22.061 [Pipeline] { (Epilogue)
00:32:22.076 [Pipeline] catchError
00:32:22.077 [Pipeline] {
00:32:22.092 [Pipeline] echo
00:32:22.093 Cleanup processes
00:32:22.099 [Pipeline] sh
00:32:22.390 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:22.390 1701708 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:32:22.390 1702154 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:22.406 [Pipeline] sh
00:32:22.696 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:22.696 ++ grep -v 'sudo pgrep'
00:32:22.696 ++ awk '{print $1}'
00:32:22.696 + sudo kill -9 1701708
00:32:22.709 [Pipeline] sh
00:32:22.997 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:35.246 [Pipeline] sh
00:32:35.533 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:35.533 Artifacts sizes are good
00:32:35.548 [Pipeline] archiveArtifacts
00:32:35.555 Archiving artifacts
00:32:35.744 [Pipeline] sh
00:32:36.031 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:36.047 [Pipeline] cleanWs
00:32:36.057 [WS-CLEANUP] Deleting project workspace...
00:32:36.057 [WS-CLEANUP] Deferred wipeout is used...
00:32:36.064 [WS-CLEANUP] done
00:32:36.066 [Pipeline] }
00:32:36.086 [Pipeline] // catchError
00:32:36.098 [Pipeline] sh
00:32:36.385 + logger -p user.info -t JENKINS-CI
00:32:36.395 [Pipeline] }
00:32:36.408 [Pipeline] // stage
00:32:36.413 [Pipeline] }
00:32:36.432 [Pipeline] // node
00:32:36.437 [Pipeline] End of Pipeline
00:32:36.472 Finished: SUCCESS
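The epilogue's process cleanup above is the usual pgrep idiom: list every process whose command line mentions the workspace, drop the pgrep pipeline itself, and force-kill whatever is left (here the stray ipmitool sdr dump). A self-contained sketch of that idiom, with a placeholder workspace path:

    # Sketch of the cleanup idiom from the epilogue; WORKSPACE is a placeholder.
    WORKSPACE=/var/jenkins/workspace/example-job
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    # guard against an empty list so kill does not error out and fail the step
    [[ -n $pids ]] && sudo kill -9 $pids || true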